| query_id (string, length 32) | query (string, length 5–5.38k) | positive_passages (list, length 1–26) | negative_passages (list, length 7–100) | subset (string, 7 classes) |
|---|---|---|---|---|
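Each row below follows this schema: a query identifier, a free-text query, a list of relevant (positive) passages, a list of non-relevant (negative) passages, and a subset label. As a minimal sketch of how such a dump might be consumed, the snippet below iterates over rows stored as JSON Lines; the file name `retrieval_pairs.jsonl` and the loader itself are illustrative assumptions, and only the field names come from the header above.

```python
# Minimal sketch, assuming the rows below are exported as JSON Lines with the
# fields named in the header; the file name "retrieval_pairs.jsonl" and this
# loader are illustrative, not part of the original dump.
import json

def iter_records(path="retrieval_pairs.jsonl"):
    """Yield (query_id, query, positive_passages, negative_passages, subset) per row."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            row = json.loads(line)
            yield (
                row["query_id"],           # 32-character identifier
                row["query"],              # free-text query (paper title or longer text)
                row["positive_passages"],  # 1-26 dicts with "docid", "text", "title"
                row["negative_passages"],  # 7-100 dicts with the same keys
                row["subset"],             # one of 7 subset labels, e.g. "scidocsrr"
            )

if __name__ == "__main__":
    for qid, query, pos, neg, subset in iter_records():
        print(qid, subset, query[:60], len(pos), len(neg))
        break
```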
b0c429d2600073cac40209bcd9c28b55
|
Fast Image Inpainting Based on Coherence Transport
|
[
{
"docid": "da237e14a3a9f6552fc520812073ee6c",
"text": "Shock filters are based in the idea to apply locally either a dilation or an erosion process, depending on whether the pixel belongs to the influence zone of a maximum or a minimum. They create a sharp shock between two influence zones and produce piecewise constant segmentations. In this paper we design specific shock filters for the enhancement of coherent flow-like structures. They are based on the idea to combine shock filtering with the robust orientation estimation by means of the structure tensor. Experiments with greyscale and colour images show that these novel filters may outperform previous shock filters as well as coherence-enhancing diffusion filters.",
"title": ""
}
] |
[
{
"docid": "124fa48e1e842f2068a8fb55a2b8bb8e",
"text": "We present an augmented reality application for mechanics education. It utilizes a recent physics engine developed for the PC gaming market to simulate physical experiments in the domain of mechanics in real time. Students are enabled to actively build own experiments and study them in a three-dimensional virtual world. A variety of tools are provided to analyze forces, mass, paths and other properties of objects before, during and after experiments. Innovative teaching content is presented that exploits the strengths of our immersive virtual environment. PhysicsPlayground serves as an example of how current technologies can be combined to deliver a new quality in physics education.",
"title": ""
},
{
"docid": "93bad64439be375200cce65a37c6b8c6",
"text": "The mobile social network (MSN) combines techniques in social science and wireless communications for mobile networking. The MSN can be considered as a system which provides a variety of data delivery services involving the social relationship among mobile users. This paper presents a comprehensive survey on the MSN specifically from the perspectives of applications, network architectures, and protocol design issues. First, major applications of the MSN are reviewed. Next, different architectures of the MSN are presented. Each of these different architectures supports different data delivery scenarios. The unique characteristics of social relationship in MSN give rise to different protocol design issues. These research issues (e.g., community detection, mobility, content distribution, content sharing protocols, and privacy) and the related approaches to address data delivery in the MSN are described. At the end, several important research directions are outlined.",
"title": ""
},
{
"docid": "d5019a5536950482e166d68dc3a7cac7",
"text": "Co-contamination of the environment with toxic chlorinated organic and heavy metal pollutants is one of the major problems facing industrialized nations today. Heavy metals may inhibit biodegradation of chlorinated organics by interacting with enzymes directly involved in biodegradation or those involved in general metabolism. Predictions of metal toxicity effects on organic pollutant biodegradation in co-contaminated soil and water environments is difficult since heavy metals may be present in a variety of chemical and physical forms. Recent advances in bioremediation of co-contaminated environments have focussed on the use of metal-resistant bacteria (cell and gene bioaugmentation), treatment amendments, clay minerals and chelating agents to reduce bioavailable heavy metal concentrations. Phytoremediation has also shown promise as an emerging alternative clean-up technology for co-contaminated environments. However, despite various investigations, in both aerobic and anaerobic systems, demonstrating that metal toxicity hampers the biodegradation of the organic component, a paucity of information exists in this area of research. Therefore, in this review, we discuss the problems associated with the degradation of chlorinated organics in co-contaminated environments, owing to metal toxicity and shed light on possible improvement strategies for effective bioremediation of sites co-contaminated with chlorinated organic compounds and heavy metals.",
"title": ""
},
{
"docid": "a009519d1ed930d40db593542e7c3e0d",
"text": "With the increasing adoption of NoSQL data base systems like MongoDB or CouchDB more and more applications store structured data according to a non-relational, document oriented model. Exposing this structured data as Linked Data is currently inhibited by a lack of standards as well as tools and requires the implementation of custom solutions. While recent efforts aim at expressing transformations of such data models into RDF in a standardized manner, there is a lack of approaches which facilitate SPARQL execution over mapped non-relational data sources. With SparqlMap-M we show how dynamic SPARQL access to non-relational data can be achieved. SparqlMap-M is an extension to our SPARQL-to-SQL rewriter SparqlMap that performs a (partial) transformation of SPARQL queries by using a relational abstraction over a document store. Further, duplicate data in the document store is used to reduce the number of joins and custom optimizations are introduced. Our showcase scenario employs the Berlin SPARQL Benchmark (BSBM) with different adaptions to a document data model. We use this scenario to demonstrate the viability of our approach and compare it to different MongoDB setups and native SQL.",
"title": ""
},
{
"docid": "de1529bcfee8a06969ee35318efe3dc3",
"text": "This paper studies the prediction of head pose from still images, and summarizes the outcome of a recently organized competition, where the task was to predict the yaw and pitch angles of an image dataset with 2790 samples with known angles. The competition received 292 entries from 52 participants, the best ones clearly exceeding the state-of-the-art accuracy. In this paper, we present the key methodologies behind selected top methods, summarize their prediction accuracy and compare with the current state of the art.",
"title": ""
},
{
"docid": "95bbe5d13f3ca5f97d01f2692a9dc77a",
"text": "Moringa oleifera Lam. (family; Moringaceae), commonly known as drumstick, have been used for centuries as a part of the Ayurvedic system for several diseases without having any scientific data. Demineralized water was used to prepare aqueous extract by maceration for 24 h and complete metabolic profiling was performed using GC-MS and HPLC. Hypoglycemic properties of extract have been tested on carbohydrate digesting enzyme activity, yeast cell uptake, muscle glucose uptake, and intestinal glucose absorption. Type 2 diabetes was induced by feeding high-fat diet (HFD) for 8 weeks and a single injection of streptozotocin (STZ, 45 mg/kg body weight, intraperitoneally) was used for the induction of type 1 diabetes. Aqueous extract of M. oleifera leaf was given orally at a dose of 100 mg/kg to STZ-induced rats and 200 mg/kg in HFD mice for 3 weeks after diabetes induction. Aqueous extract remarkably inhibited the activity of α-amylase and α-glucosidase and it displayed improved antioxidant capacity, glucose tolerance and rate of glucose uptake in yeast cell. In STZ-induced diabetic rats, it produces a maximum fall up to 47.86% in acute effect whereas, in chronic effect, it was 44.5% as compared to control. The fasting blood glucose, lipid profile, liver marker enzyme level were significantly (p < 0.05) restored in both HFD and STZ experimental model. Multivariate principal component analysis on polar and lipophilic metabolites revealed clear distinctions in the metabolite pattern in extract and in blood after its oral administration. Thus, the aqueous extract can be used as phytopharmaceuticals for the management of diabetes by using as adjuvants or alone.",
"title": ""
},
{
"docid": "2efe5c0228e6325cdbb8e0922c19924f",
"text": "Patient interactions with health care providers result in entries to electronic health records (EHRs). EHRs were built for clinical and billing purposes but contain many data points about an individual. Mining these records provides opportunities to extract electronic phenotypes that can be paired with genetic data to identify genes underlying common human diseases. This task remains challenging: high quality phenotyping is costly and requires physician review; many fields in the records are sparsely filled; and our definitions of diseases are continuing to improve over time. Here we develop and evaluate a semi-supervised learning method for EHR phenotype extraction using denoising autoencoders for phenotype stratification. By combining denoising autoencoders with random forests we find classification improvements across simulation models, particularly in cases where only a small number of patients have high quality phenotype. This situation is commonly encountered in research with EHRs. Denoising autoencoders perform dimensionality reduction allowing visualization and clustering for the discovery of new subtypes of disease. This method represents a promising approach to clarify disease subtypes and improve genotype-phenotype association studies that leverage EHRs.",
"title": ""
},
{
"docid": "94bd0b242079d2b82c141e9f117154f7",
"text": "BACKGROUND\nNewborns with critical health conditions are monitored in neonatal intensive care units (NICU). In NICU, one of the most important problems that they face is the risk of brain injury. There is a need for continuous monitoring of newborn's brain function to prevent any potential brain injury. This type of monitoring should not interfere with intensive care of the newborn. Therefore, it should be non-invasive and portable.\n\n\nMETHODS\nIn this paper, a low-cost, battery operated, dual wavelength, continuous wave near infrared spectroscopy system for continuous bedside hemodynamic monitoring of neonatal brain is presented. The system has been designed to optimize SNR by optimizing the wavelength-multiplexing parameters with special emphasis on safety issues concerning burn injuries. SNR improvement by utilizing the entire dynamic range has been satisfied with modifications in analog circuitry.\n\n\nRESULTS AND CONCLUSION\nAs a result, a shot-limited SNR of 67 dB has been achieved for 10 Hz temporal resolution. The system can operate more than 30 hours without recharging when an off-the-shelf 1850 mAh-7.2 V battery is used. Laboratory tests with optical phantoms and preliminary data recorded in NICU demonstrate the potential of the system as a reliable clinical tool to be employed in the bedside regional monitoring of newborn brain metabolism under intensive care.",
"title": ""
},
{
"docid": "82cf8d72eebcc7cfa424cf09ed80d025",
"text": "Along with its numerous benefits, the Internet also created numerous ways to compromise the security and stability of the systems connected to it. In 2003, 137529 incidents were reported to CERT/CC © while in 1999, there were 9859 reported incidents (CERT/CC©, 2003). Operations, which are primarily designed to protect the availability, confidentiality and integrity of critical network information systems, are considered to be within the scope of security management. Security management operations protect computer networks against denial-of-service attacks, unauthorized disclosure of information, and the modification or destruction of data. Moreover, the automated detection and immediate reporting of these events are required in order to provide the basis for a timely response to attacks (Bass, 2000). Security management plays an important, albeit often neglected, role in network management tasks.",
"title": ""
},
{
"docid": "020799a5f143063b843aaf067f52cf29",
"text": "In this paper we propose a novel entity annotator for texts which hinges on TagME's algorithmic technology, currently the best one available. The novelty is twofold: from the one hand, we have engineered the software in order to be modular and more efficient; from the other hand, we have improved the annotation pipeline by re-designing all of its three main modules: spotting, disambiguation and pruning. In particular, the re-design has involved the detailed inspection of the performance of these modules by developing new algorithms which have been in turn tested over all publicly available datasets (i.e. AIDA, IITB, MSN, AQUAINT, and the one of the ERD Challenge). This extensive experimentation allowed us to derive the best combination which achieved on the ERD development dataset an F1 score of 74.8%, which turned to be 67.2% F1 for the test dataset. This final result was due to an impressive precision equal to 87.6%, but very low recall 54.5%. With respect to classic TagME on the development dataset the improvement ranged from 1% to 9% on the D2W benchmark, depending on the disambiguation algorithm being used. As a side result, the final software can be interpreted as a flexible library of several parsing/disambiguation and pruning modules that can be used to build up new and more sophisticated entity annotators. We plan to release our library to the public as an open-source project.",
"title": ""
},
{
"docid": "ab8599cbe4b906cea6afab663cbe2caf",
"text": "Real-time ETL and data warehouse multidimensional modeling (DMM) of business operational data has become an important research issue in the area of real-time data warehousing (RTDW). In this study, some of the recently proposed real-time ETL technologies from the perspectives of data volumes, frequency, latency, and mode have been discussed. In addition, we highlight several advantages of using semi-structured DMM (i.e. XML) in RTDW instead of traditional structured DMM (i.e., relational). We compare the two DMMs on the basis of four characteristics: heterogeneous data integration, types of measures supported, aggregate query processing, and incremental maintenance. We implemented the RTDW framework for an example telecommunication organization. Our experimental analysis shows that if the delay comes from the incremental maintenance of DMM, no ETL technology (full-reloading or incremental-loading) can help in real-time business intelligence.",
"title": ""
},
{
"docid": "f9d91253c5c276bb020daab4a4127822",
"text": "Conveying a narrative with visualizations often requires choosing an order in which to present visualizations. While evidence exists that narrative sequencing in traditional stories can affect comprehension and memory, little is known about how sequencing choices affect narrative visualization. We consider the forms and reactions to sequencing in narrative visualization presentations to provide a deeper understanding with a focus on linear, 'slideshow-style' presentations. We conduct a qualitative analysis of 42 professional narrative visualizations to gain empirical knowledge on the forms that structure and sequence take. Based on the results of this study we propose a graph-driven approach for automatically identifying effective sequences in a set of visualizations to be presented linearly. Our approach identifies possible transitions in a visualization set and prioritizes local (visualization-to-visualization) transitions based on an objective function that minimizes the cost of transitions from the audience perspective. We conduct two studies to validate this function. We also expand the approach with additional knowledge of user preferences for different types of local transitions and the effects of global sequencing strategies on memory, preference, and comprehension. Our results include a relative ranking of types of visualization transitions by the audience perspective and support for memory and subjective rating benefits of visualization sequences that use parallelism as a structural device. We discuss how these insights can guide the design of narrative visualization and systems that support optimization of visualization sequence.",
"title": ""
},
{
"docid": "fb97b11eba38f84f38b473a09119162a",
"text": "We show how to encrypt a relational database in such a way that it can efficiently support a large class of SQL queries. Our construction is based solely on structured encryption and does not make use of any property-preserving encryption (PPE) schemes such as deterministic and order-preserving encryption. As such, our approach leaks considerably less than PPE-based solutions which have recently been shown to reveal a lot of information in certain settings (Naveed et al., CCS ’15 ). Our construction achieves asymptotically optimal query complexity under very natural conditions on the database and queries.",
"title": ""
},
{
"docid": "3ef6a2d1c125d5c7edf60e3ceed23317",
"text": "This paper introduces a Monte-Carlo algorithm for online planning in large POMDPs. The algorithm combines a Monte-Carlo update of the agent’s belief state with a Monte-Carlo tree search from the current belief state. The new algorithm, POMCP, has two important properties. First, MonteCarlo sampling is used to break the curse of dimensionality both during belief state updates and during planning. Second, only a black box simulator of the POMDP is required, rather than explicit probability distributions. These properties enable POMCP to plan effectively in significantly larger POMDPs than has previously been possible. We demonstrate its effectiveness in three large POMDPs. We scale up a well-known benchmark problem, rocksample, by several orders of magnitude. We also introduce two challenging new POMDPs: 10 × 10 battleship and partially observable PacMan, with approximately 10 and 10 states respectively. Our MonteCarlo planning algorithm achieved a high level of performance with no prior knowledge, and was also able to exploit simple domain knowledge to achieve better results with less search. POMCP is the first general purpose planner to achieve high performance in such large and unfactored POMDPs.",
"title": ""
},
{
"docid": "7ec5faf2081790e7baa1832d5f9ab5bd",
"text": "Text detection in complex background images is a challenging task for intelligent vehicles. Actually, almost all the widely-used systems focus on commonly used languages while for some minority languages, such as the Uyghur language, text detection is paid less attention. In this paper, we propose an effective Uyghur language text detection system in complex background images. First, a new channel-enhanced maximally stable extremal regions (MSERs) algorithm is put forward to detect component candidates. Second, a two-layer filtering mechanism is designed to remove most non-character regions. Third, the remaining component regions are connected into short chains, and the short chains are extended by a novel extension algorithm to connect the missed MSERs. Finally, a two-layer chain elimination filter is proposed to prune the non-text chains. To evaluate the system, we build a new data set by various Uyghur texts with complex backgrounds. Extensive experimental comparisons show that our system is obviously effective for Uyghur language text detection in complex background images. The F-measure is 85%, which is much better than the state-of-the-art performance of 75.5%.",
"title": ""
},
{
"docid": "ee46ee9e45a87c111eb14397c99cd653",
"text": "This is a review of unsupervised learning applied to videos with the aim of learning visual representations. We look at different realizations of the notion of temporal coherence across various models. We try to understand the challenges being faced, the strengths and weaknesses of different approaches and identify directions for future work. Unsupervised Learning of Visual Representations using Videos Nitish Srivastava Department of Computer Science, University of Toronto",
"title": ""
},
{
"docid": "9665328d7993e2b1298a2c849c987979",
"text": "The case study presented here, deals with the subject of second language acquisition making at the same time an effort to show as much as possible how L1 was acquired and the ways L1 affected L2, through the process of examining a Greek girl who has been exposed to the English language from the age of eight. Furthermore, I had the chance to analyze the method used by the frontistirio teachers and in what ways this method helps or negatively influences children regarding their performance in the four basic skills. We will evaluate the evidence acquired by the girl by studying briefly the basic theories provided by important figures in the field of L2. Finally, I will also include my personal suggestions and the improvement of the child’s abilities and I will state my opinion clearly.",
"title": ""
},
{
"docid": "819693b9acce3dfbb74694733ab4d10f",
"text": "The present research examined how mode of play in an educational mathematics video game impacts learning, performance, and motivation. The game was designed for the practice and automation of arithmetic skills to increase fluency and was adapted to allow for individual, competitive, or collaborative game play. Participants (N 58) from urban middle schools were randomly assigned to each experimental condition. Results suggested that, in comparison to individual play, competition increased in-game learning, whereas collaboration decreased performance during the experimental play session. Although out-of-game math fluency improved overall, it did not vary by condition. Furthermore, competition and collaboration elicited greater situational interest and enjoyment and invoked a stronger mastery goal orientation. Additionally, collaboration resulted in stronger intentions to play the game again and to recommend it to others. Results are discussed in terms of the potential for mathematics learning games and technology to increase student learning and motivation and to demonstrate how different modes of engagement can inform the instructional design of such games.",
"title": ""
},
{
"docid": "a42a19df66ab8827bfcf4c4ee709504d",
"text": "We describe the numerical methods required in our approach to multi-dimensional scaling. The rationale of this approach has appeared previously. 1. Introduction We describe a numerical method for multidimensional scaling. In a companion paper [7] we describe the rationale for our approach to scaling, which is related to that of Shepard [9]. As the numerical methods required are largely unfamiliar to psychologists, and even have elements of novelty within the field of numerical analysis, it seems worthwhile to describe them. In [7] we suppose that there are n objects 1, · · · , n, and that we have experimental values 8;; of dissimilarity between them. For a configuration of points x1 , • • • , x .. in t:-dimensional space, with interpoint distances d;; , we defined the stress of the configuration by The stress is intendoo to be a measure of how well the configuration matches the data. More fully, it is supposed that the \"true\" dissimilarities result from some unknown monotone distortion of the interpoint distances of some \"true\" configuration, and that the observed dissimilarities differ from the true dissimilarities only because of random fluctuation. The stress is essentially the root-mean-square residual departure from this hypothesis. By definition, the best-fitting configuration in t-dimensional space, for a fixed value of t, is that configuration which minimizes the stress. The primary computational problem is to find that configuration. A secondary computational problem, of independent interest, is to find the values of",
"title": ""
},
{
"docid": "7e8b58b88a1a139f9eb6642a69eb697a",
"text": "We present a fully convolutional autoencoder for light fields, which jointly encodes stacks of horizontal and vertical epipolar plane images through a deep network of residual layers. The complex structure of the light field is thus reduced to a comparatively low-dimensional representation, which can be decoded in a variety of ways. The different pathways of upconvolution we currently support are for disparity estimation and separation of the lightfield into diffuse and specular intrinsic components. The key idea is that we can jointly perform unsupervised training for the autoencoder path of the network, and supervised training for the other decoders. This way, we find features which are both tailored to the respective tasks and generalize well to datasets for which only example light fields are available. We provide an extensive evaluation on synthetic light field data, and show that the network yields good results on previously unseen real world data captured by a Lytro Illum camera and various gantries.",
"title": ""
}
] |
scidocsrr
|
6788a1ff9e1df4f3a515adc32d05e2be
|
A REVIEW ON IMAGE SEGMENTATION TECHNIQUES WITH REMOTE SENSING PERSPECTIVE
|
[
{
"docid": "3fa70c2667c6dbe179a7e17e44571727",
"text": "A~tract--For the past decade, many image segmentation techniques have been proposed. These segmentation techniques can be categorized into three classes, (I) characteristic feature thresholding or clustering, (2) edge detection, and (3) region extraction. This survey summarizes some of these techniques, in the area of biomedical image segmentation, most proposed techniques fall into the categories of characteristic feature thresholding or clustering and edge detection.",
"title": ""
},
{
"docid": "d984489b4b71eabe39ed79fac9cf27a1",
"text": "Remote sensing from airborne and spaceborne platforms provides valuable data for mapping, environmental monitoring, disaster management and civil and military intelligence. However, to explore the full value of these data, the appropriate information has to be extracted and presented in standard format to import it into geo-information systems and thus allow efficient decision processes. The object-oriented approach can contribute to powerful automatic and semiautomatic analysis for most remote sensing applications. Synergetic use to pixel-based or statistical signal processing methods explores the rich information contents. Here, we explain principal strategies of object-oriented analysis, discuss how the combination with fuzzy methods allows implementing expert knowledge and describe a representative example for the proposed workflow from remote sensing imagery to GIS. The strategies are demonstrated using the first objectoriented image analysis software on the market, eCognition, which provides an appropriate link between remote sensing",
"title": ""
}
] |
[
{
"docid": "451434f1181c021eb49442d6eb6617c5",
"text": "In this paper, we use variational recurrent neural network to investigate the anomaly detection problem on graph time series. The temporal correlation is modeled by the combination of recurrent neural network (RNN) and variational inference (VI), while the spatial information is captured by the graph convolutional network. In order to incorporate external factors, we use feature extractor to augment the transition of latent variables, which can learn the influence of external factors. With the target function as accumulative ELBO, it is easy to extend this model to on-line method. The experimental study on traffic flow data shows the detection capability of the proposed method.",
"title": ""
},
{
"docid": "9c67b538a5e6806273b26d9c5899ef42",
"text": "Back propagation training algorithms have been implemented by many researchers for their own purposes and provided publicly on the internet for others to use in veriication of published results and for reuse in unrelated research projects. Often, the source code of a package is used as the basis for a new package for demonstrating new algorithm variations, or some functionality is added speciically for analysis of results. However, there are rarely any guarantees that the original implementation is faithful to the algorithm it represents, or that its code is bug free or accurate. This report attempts to look at a few implementations and provide a test suite which shows deeciencies in some software available which the average researcher may not be aware of, and may not have the time to discover on their own. This test suite may then be used to test the correctness of new packages.",
"title": ""
},
{
"docid": "082894a8498a5c22af8903ad8ea6399a",
"text": "Despite the proliferation of mobile health applications, few target low literacy users. This is a matter of concern because 43% of the United States population is functionally illiterate. To empower everyone to be a full participant in the evolving health system and prevent further disparities, we must understand the design needs of low literacy populations. In this paper, we present two complementary studies of four graphical user interface (GUI) widgets and three different cross-page navigation styles in mobile applications with a varying literacy, chronically-ill population. Participant's navigation and interaction styles were documented while they performed search tasks using high fidelity prototypes running on a mobile device. Results indicate that participants could use any non-text based GUI widgets. For navigation structures, users performed best when navigating a linear structure, but preferred the features of cross-linked navigation. Based on these findings, we provide some recommendations for designing accessible mobile applications for varying-literacy populations.",
"title": ""
},
{
"docid": "80ece123483d6de02c4e621bdb8eb0fc",
"text": "Resistive-switching memory (RRAM) based on transition metal oxides is a potential candidate for replacing Flash and dynamic random access memory in future generation nodes. Although very promising from the standpoints of scalability and technology, RRAM still has severe drawbacks in terms of understanding and modeling of the resistive-switching mechanism. This paper addresses the modeling of resistive switching in bipolar metal-oxide RRAMs. Reset and set processes are described in terms of voltage-driven ion migration within a conductive filament generated by electroforming. Ion migration is modeled by drift–diffusion equations with Arrhenius-activated diffusivity and mobility. The local temperature and field are derived from the self-consistent solution of carrier and heat conduction equations in a 3-D axis-symmetric geometry. The model accounts for set–reset characteristics, correctly describing the abrupt set and gradual reset transitions and allowing scaling projections for metal-oxide RRAM.",
"title": ""
},
{
"docid": "73cee52ebbb10167f7d32a49d1243af6",
"text": "We consider the problem of a robot learning the mechanical properties of objects through physical interaction with the object, and introduce a practical, data-efficient approach for identifying the motion models of these objects. The proposed method utilizes a physics engine, where the robot seeks to identify the inertial and friction parameters of the object by simulating its motion under different values of the parameters and identifying those that result in a simulation which matches the observed real motions. The problem is solved in a Bayesian optimization framework. The same framework is used for both identifying the model of an object online and searching for a policy that would minimize a given cost function according to the identified model. Experimental results both in simulation and using a real robot indicate that the proposed method outperforms state-of-the-art model-free reinforcement learning approaches.",
"title": ""
},
{
"docid": "e64caf71b75ac93f0426b199844f319b",
"text": "INTRODUCTION\nVaginismus is mostly unknown among clinicians and women. Vaginismus causes women to have fear, anxiety, and pain with penetration attempts.\n\n\nAIM\nTo present a large cohort of patients based on prior published studies approved by an institutional review board and the Food and Drug Administration using a comprehensive multimodal vaginismus treatment program to treat the physical and psychologic manifestations of women with vaginismus and to record successes, failures, and untoward effects of this treatment approach.\n\n\nMETHODS\nAssessment of vaginismus included a comprehensive pretreatment questionnaire, the Female Sexual Function Index (FSFI), and consultation. All patients signed a detailed informed consent. Treatment consisted of a multimodal approach including intravaginal injections of onabotulinumtoxinA (Botox) and bupivacaine, progressive dilation under conscious sedation, indwelling dilator, follow-up and support with office visits, phone calls, e-mails, dilation logs, and FSFI reports.\n\n\nMAIN OUTCOME MEASURES\nLogs noting dilation progression, pain and anxiety scores, time to achieve intercourse, setbacks, and untoward effects. Post-treatment FSFI scores were compared with preprocedure scores.\n\n\nRESULTS\nOne hundred seventy-one patients (71%) reported having pain-free intercourse at a mean of 5.1 weeks (median = 2.5). Six patients (2.5%) were unable to achieve intercourse within a 1-year period after treatment and 64 patients (26.6%) were lost to follow-up. The change in the overall FSFI score measured at baseline, 3 months, 6 months, and 1 year was statistically significant at the 0.05 level. Three patients developed mild temporary stress incontinence, two patients developed a short period of temporary blurred vision, and one patient developed temporary excessive vaginal dryness. All adverse events resolved by approximately 4 months. One patient required retreatment followed by successful coitus.\n\n\nCONCLUSION\nA multimodal program that treated the physical and psychologic aspects of vaginismus enabled women to achieve pain-free intercourse as noted by patient communications and serial female sexual function studies. Further studies are indicated to better understand the individual components of this multimodal treatment program. Pacik PT, Geletta S. Vaginismus Treatment: Clinical Trials Follow Up 241 Patients. Sex Med 2017;5:e114-e123.",
"title": ""
},
{
"docid": "d9a99642b106ad3f63134916bd75329b",
"text": "We extend Convolutional Neural Networks (CNNs) on flat and regular domains (e.g. 2D images) to curved 2D manifolds embedded in 3D Euclidean space that are discretized as irregular surface meshes and widely used to represent geometric data in Computer Vision and Graphics. We define surface convolution on tangent spaces of a surface domain, where the convolution has two desirable properties: 1) the distortion of surface domain signals is locally minimal when being projected to the tangent space, and 2) the translation equi-variance property holds locally, by aligning tangent spaces for neighboring points with the canonical torsion-free parallel transport that preserves tangent space metric. To implement such a convolution, we rely on a parallel N -direction frame field on the surface that minimizes the field variation and therefore is as compatible as possible to and approximates the parallel transport. On the tangent spaces equipped with parallel frames, the computation of surface convolution becomes standard routine. The tangential frames have N rotational symmetry that must be disambiguated, which we resolve by duplicating the surface domain to construct its covering space induced by the parallel frames and grouping the feature maps into N sets accordingly; each surface convolution is computed on the N branches of the cover space with their respective feature maps while the kernel weights are shared. To handle the irregular data points of a discretized surface mesh while being able to share trainable kernel weights, we make the convolution semi-discrete, i.e. the convolution kernels are smooth polynomial functions, and their convolution with discrete surface data points becomes discrete sampling and weighted summation. In addition, pooling and unpooling operations for surface CNNs on a mesh are computed along the mesh hierarchy built through simplification. The presented surface-based CNNs allow us to do effective deep learning on surface meshes using network structures very similar to those for flat and regular domains. In particular, we show that for various tasks, including classification, segmentation and non-rigid registration, surface CNNs using only raw input signals achieve superior performances than other neural network models using sophisticated pre-computed input features, and enable a simple non-rigid human-body registration procedure by regressing to restpose positions directly.",
"title": ""
},
{
"docid": "6b3c8e869651690193e66bc2524c1f56",
"text": "Convolutional Neural Networks (CNNs) have been widely used for face recognition and got extraordinary performance with large number of available face images of different people. However, it is hard to get uniform distributed data for all people. In most face datasets, a large proportion of people have few face images. Only a small number of people appear frequently with more face images. These people with more face images have higher impact on the feature learning than others. The imbalanced distribution leads to the difficulty to train a CNN model for feature representation that is general for each person, instead of mainly for the people with large number of face images. To address this challenge, we proposed a center invariant loss which aligns the center of each person to enforce the learned features to have a general representation for all people. The center invariant loss penalizes the difference between each center of classes. With center invariant loss, we can train a robust CNN that treats each class equally regardless the number of class samples. Extensive experiments demonstrate the effectiveness of the proposed approach. We achieve state-of-the-art results on LFW and YTF datasets.",
"title": ""
},
{
"docid": "266625d5f1c658849d34514d5dc9586f",
"text": "Hand written digit recognition is highly nonlinear problem. Recognition of handwritten numerals plays an active role in day to day life now days. Office automation, e-governors and many other areas, reading printed or handwritten documents and convert them to digital media is very crucial and time consuming task. So the system should be designed in such a way that it should be capable of reading handwritten numerals and provide appropriate response as humans do. However, handwritten digits are varying from person to person because each one has their own style of writing, means the same digit or character/word written by different writer will be different even in different languages. This paper presents survey on handwritten digit recognition systems with recent techniques, with three well known classifiers namely MLP, SVM and k-NN used for classification. This paper presents comparative analysis that describes recent methods and helps to find future scope.",
"title": ""
},
{
"docid": "94189593d39be7c5e5411482c7b996e3",
"text": "In this paper, interval-valued fuzzy planar graphs are defined and several properties are studied. The interval-valued fuzzy graphs are more efficient than fuzzy graphs, since the degree of membership of vertices and edges lie within the interval [0, 1] instead at a point in fuzzy graphs. We also use the term ‘degree of planarity’ to measures the nature of planarity of an interval-valued fuzzy graph. The other relevant terms such as strong edges, interval-valued fuzzy faces, strong interval-valued fuzzy faces are defined here. The interval-valued fuzzy dual graph which is closely associated to the interval-valued fuzzy planar graph is defined. Several properties of interval-valued fuzzy dual graph are also studied. An example of interval-valued fuzzy planar graph is given.",
"title": ""
},
{
"docid": "5441c49359d4446a51cea3f13991a7dc",
"text": "Nowadays, smart composite materials embed miniaturized sensors for structural health monitoring (SHM) in order to mitigate the risk of failure due to an overload or to unwanted inhomogeneity resulting from the fabrication process. Optical fiber sensors, and more particularly fiber Bragg grating (FBG) sensors, outperform traditional sensor technologies, as they are lightweight, small in size and offer convenient multiplexing capabilities with remote operation. They have thus been extensively associated to composite materials to study their behavior for further SHM purposes. This paper reviews the main challenges arising from the use of FBGs in composite materials. The focus will be made on issues related to temperature-strain discrimination, demodulation of the amplitude spectrum during and after the curing process as well as connection between the embedded optical fibers and the surroundings. The main strategies developed in each of these three topics will be summarized and compared, demonstrating the large progress that has been made in this field in the past few years.",
"title": ""
},
{
"docid": "a354b6c03cadf539ccd01a247447ebc1",
"text": "In the present study, we tested in vitro different parts of 35 plants used by tribals of the Similipal Biosphere Reserve (SBR, Mayurbhanj district, India) for the management of infections. From each plant, three extracts were prepared with different solvents (water, ethanol, and acetone) and tested for antimicrobial (E. coli, S. aureus, C. albicans); anthelmintic (C. elegans); and antiviral (enterovirus 71) bioactivity. In total, 35 plant species belonging to 21 families were recorded from tribes of the SBR and periphery. Of the 35 plants, eight plants (23%) showed broad-spectrum in vitro antimicrobial activity (inhibiting all three test strains), while 12 (34%) exhibited narrow spectrum activity against individual pathogens (seven as anti-staphylococcal and five as anti-candidal). Plants such as Alangium salviifolium, Antidesma bunius, Bauhinia racemosa, Careya arborea, Caseria graveolens, Cleistanthus patulus, Colebrookea oppositifolia, Crotalaria pallida, Croton roxburghii, Holarrhena pubescens, Hypericum gaitii, Macaranga peltata, Protium serratum, Rubus ellipticus, and Suregada multiflora showed strong antibacterial effects, whilst Alstonia scholaris, Butea monosperma, C. arborea, C. pallida, Diospyros malbarica, Gmelina arborea, H. pubescens, M. peltata, P. serratum, Pterospermum acerifolium, R. ellipticus, and S. multiflora demonstrated strong antifungal activity. Plants such as A. salviifolium, A. bunius, Aporosa octandra, Barringtonia acutangula, C. graveolens, C. pallida, C. patulus, G. arborea, H. pubescens, H. gaitii, Lannea coromandelica, M. peltata, Melastoma malabathricum, Millettia extensa, Nyctanthes arbor-tristis, P. serratum, P. acerifolium, R. ellipticus, S. multiflora, Symplocos cochinchinensis, Ventilago maderaspatana, and Wrightia arborea inhibit survival of C. elegans and could be a potential source for anthelmintic activity. Additionally, plants such as A. bunius, C. graveolens, C. patulus, C. oppositifolia, H. gaitii, M. extensa, P. serratum, R. ellipticus, and V. maderaspatana showed anti-enteroviral activity. Most of the plants, whose traditional use as anti-infective agents by the tribals was well supported, show in vitro inhibitory activity against an enterovirus, bacteria (E. coil, S. aureus), a fungus (C. albicans), or a nematode (C. elegans).",
"title": ""
},
{
"docid": "18a317b8470b4006ccea0e436f54cfcd",
"text": "Device-to-device communications enable two proximity users to transmit signal directly without going through the base station. It can increase network spectral efficiency and energy efficiency, reduce transmission delay, offload traffic for the BS, and alleviate congestion in the cellular core networks. However, many technical challenges need to be addressed for D2D communications to harvest the potential benefits, including device discovery and D2D session setup, D2D resource allocation to guarantee QoS, D2D MIMO transmission, as well as D2D-aided BS deployment in heterogeneous networks. In this article, the basic concepts of D2D communications are first introduced, and then existing fundamental works on D2D communications are discussed. In addition, some potential research topics and challenges are also identified.",
"title": ""
},
{
"docid": "c839542db0e80ce253a170a386d91bab",
"text": "Description\nThe American College of Physicians (ACP) developed this guideline to present the evidence and provide clinical recommendations on the management of gout.\n\n\nMethods\nUsing the ACP grading system, the committee based these recommendations on a systematic review of randomized, controlled trials; systematic reviews; and large observational studies published between January 2010 and March 2016. Clinical outcomes evaluated included pain, joint swelling and tenderness, activities of daily living, patient global assessment, recurrence, intermediate outcomes of serum urate levels, and harms.\n\n\nTarget Audience and Patient Population\nThe target audience for this guideline includes all clinicians, and the target patient population includes adults with acute or recurrent gout.\n\n\nRecommendation 1\nACP recommends that clinicians choose corticosteroids, nonsteroidal anti-inflammatory drugs (NSAIDs), or colchicine to treat patients with acute gout. (Grade: strong recommendation, high-quality evidence).\n\n\nRecommendation 2\nACP recommends that clinicians use low-dose colchicine when using colchicine to treat acute gout. (Grade: strong recommendation, moderate-quality evidence).\n\n\nRecommendation 3\nACP recommends against initiating long-term urate-lowering therapy in most patients after a first gout attack or in patients with infrequent attacks. (Grade: strong recommendation, moderate-quality evidence).\n\n\nRecommendation 4\nACP recommends that clinicians discuss benefits, harms, costs, and individual preferences with patients before initiating urate-lowering therapy, including concomitant prophylaxis, in patients with recurrent gout attacks. (Grade: strong recommendation, moderate-quality evidence).",
"title": ""
},
{
"docid": "f1ebd840092228e48a3ab996287e7afd",
"text": "Negative emotions are reliably associated with poorer health (e.g., Kiecolt-Glaser, McGuire, Robles, & Glaser, 2002), but only recently has research begun to acknowledge the important role of positive emotions for our physical health (Fredrickson, 2003). We examine the link between dispositional positive affect and one potential biological pathway between positive emotions and health-proinflammatory cytokines, specifically levels of interleukin-6 (IL-6). We hypothesized that greater trait positive affect would be associated with lower levels of IL-6 in a healthy sample. We found support for this hypothesis across two studies. We also explored the relationship between discrete positive emotions and IL-6 levels, finding that awe, measured in two different ways, was the strongest predictor of lower levels of proinflammatory cytokines. These effects held when controlling for relevant personality and health variables. This work suggests a potential biological pathway between positive emotions and health through proinflammatory cytokines.",
"title": ""
},
{
"docid": "81c59b4a7a59a262f9c270b76ef0f747",
"text": "Single-phase power factor correction (PFC) ac-dc converters are widely used in the industry for ac-dc power conversion from single phase ac-mains to an isolated output dc voltage. Typically, for high-power applications, such converters use an ac-dc boost input converter followed by a dc-dc full-bridge converter. A new ac-dc single-stage high-power universal PFC ac input full-bridge, pulse-width modulated converter is proposed in this paper. The converter can operate with an excellent input power factor, continuous input and output currents, and a non-excessive intermediate dc bus voltage and has reduced number of semiconductor devices thus presenting a cost-effective novel solution for such applications. In this paper, the operation of the proposed converter is explained, a steady-state analysis of its operation is performed, and the results of the analysis are used to develop a procedure for its design. The operation of the proposed converter is confirmed with results obtained from an experimental prototype.",
"title": ""
},
{
"docid": "ec681bc427c66adfad79008840ea9b60",
"text": "With the rapid development of the Computer Science and Technology, It has become a major problem for the users that how to quickly find useful or needed information. Text categorization can help people to solve this question. The feature selection method has become one of the most critical techniques in the field of the text automatic categorization. A new method of the text feature selection based on Information Gain and Genetic Algorithm is proposed in this paper. This method chooses the feature based on information gain with the frequency of items. Meanwhile, for the information filtering systems, this method has been improved fitness function to fully consider the characteristics of weight, text and vector similarity dimension, etc. The experiment has proved that the method can reduce the dimension of text vector and improve the precision of text classification.",
"title": ""
},
{
"docid": "293e2cd2647740bb65849fed003eb4ac",
"text": "In this paper we apply the Local Binary Pattern on Three Orthogonal Planes (LBP-TOP) descriptor to the field of human action recognition. A video sequence is described as a collection of spatial-temporal words after the detection of space-time interest points and the description of the area around them. Our contribution has been in the description part, showing LBP-TOP to be a promising descriptor for human action classification purposes. We have also developed several extensions to the descriptor to enhance its performance in human action recognition, showing the method to be computationally efficient.",
"title": ""
},
{
"docid": "dd60c1f0ae3707cbeb24da1137ee327d",
"text": "Silicone oils have wide range of applications in personal care products due to their unique properties of high lubricity, non-toxicity, excessive spreading and film formation. They are usually employed in the form of emulsions due to their inert nature. Until now, different conventional emulsification techniques have been developed and applied to prepare silicone oil emulsions. The size and uniformity of emulsions showed important influence on stability of droplets, which further affect the application performance. Therefore, various strategies were developed to improve the stability as well as application performance of silicone oil emulsions. In this review, we highlight different factors influencing the stability of silicone oil emulsions and explain various strategies to overcome the stability problems. In addition, the silicone deposition on the surface of hair substrates and different approaches to increase their deposition are also discussed in detail.",
"title": ""
},
{
"docid": "ff41327bad272a6d80d4daba25b6472f",
"text": "The dense very deep submicron (VDSM) system on chips (SoC) face a serious limitation in performance due to reverse scaling of global interconnects. Interconnection techniques which decrease delay, delay variation and ensure signal integrity, play an important role in the growth of the semiconductor industry into future generations. Current-mode low-swing interconnection techniques provide an attractive alternative to conventional full-swing voltage mode signaling in terms of delay, power and noise immunity. In this paper, we present a new current-mode low-swing interconnection technique which reduces the delay and delay variations in global interconnects. Extensive simulations for performance of our circuit under crosstalk, supply voltage, process and temperature variations were performed. The results indicate significant savings in power, reduction in delay and increase in noise immunity compared to other techniques.",
"title": ""
}
] |
scidocsrr
|
44e8caf0bf93aa5b054500a852704660
|
Urdu text classification
|
[
{
"docid": "17ec8f66fc6822520e2f22bd035c3ba0",
"text": "The paper discusses various phases in Urdu lexicon development from corpus. First the issues related with Urdu orthography such as optional vocalic content, Unicode variations, name recognition, spelling variation etc. have been described, then corpus acquisition, corpus cleaning, tokenization etc has been discussed and finally Urdu lexicon development i.e. POS tags, features, lemmas, phonemic transcription and the format of the lexicon has been discussed .",
"title": ""
},
{
"docid": "61662cfd286c06970243bc13d5eff566",
"text": "This paper develops a theoretical learning model of text classification for Support Vector Machines (SVMs). It connects the statistical properties of text-classification tasks with the generalization performance of a SVM in a quantitative way. Unlike conventional approaches to learning text classifiers, which rely primarily on empirical evidence, this model explains why and when SVMs perform well for text classification. In particular, it addresses the following questions: Why can support vector machines handle the large feature spaces in text classification effectively? How is this related to the statistical properties of text? What are sufficient conditions for applying SVMs to text-classification problems successfully?",
"title": ""
}
] |
[
{
"docid": "396dd0517369d892d249bb64fa410128",
"text": "Within the philosophy of language, pragmatics has tended to be seen as an adjunct to, and a means of solving problems in, semantics. A cognitive-scientific conception of pragmatics as a mental processing system responsible for interpreting ostensive communicative stimuli (specifically, verbal utterances) has effected a transformation in the pragmatic issues pursued and the kinds of explanation offered. Taking this latter perspective, I compare two distinct proposals on the kinds of processes, and the architecture of the system(s), responsible for the recovery of speaker meaning (both explicitly and implicitly communicated meaning). 1. Pragmatics as a Cognitive System 1.1 From Philosophy of Language to Cognitive Science Broadly speaking, there are two perspectives on pragmatics: the ‘philosophical’ and the ‘cognitive’. From the philosophical perspective, an interest in pragmatics has been largely motivated by problems and issues in semantics. A familiar instance of this was Grice’s concern to maintain a close semantic parallel between logical operators and their natural language counterparts, such as ‘not’, ‘and’, ‘or’, ‘if’, ‘every’, ‘a/some’, and ‘the’, in the face of what look like quite major divergences in the meaning of the linguistic elements (see Grice 1975, 1981). The explanation he provided was pragmatic, i.e. in terms of what occurs when the logical semantics of these terms is put to rational communicative use. Consider the case of ‘and’: (1) a. Mary went to a movie and Sam read a novel. b. She gave him her key and he opened the door. c. She insulted him and he left the room. While (a) seems to reflect the straightforward truth-functional symmetrical connection, (b) and (c) communicate a stronger asymmetric relation: temporal Many thanks to Richard Breheny, Sam Guttenplan, Corinne Iten, Deirdre Wilson and Vladimir Zegarac for helpful comments and support during the writing of this paper. Address for correspondence: Department of Phonetics & Linguistics, University College London, Gower Street, London WC1E 6BT, UK. Email: robyn linguistics.ucl.ac.uk Mind & Language, Vol. 17 Nos 1 and 2 February/April 2002, pp. 127–148. Blackwell Publishers Ltd. 2002, 108 Cowley Road, Oxford, OX4 1JF, UK and 350 Main Street, Malden, MA 02148, USA.",
"title": ""
},
{
"docid": "dcd21065898c9dd108617a3db8dad6a1",
"text": "Advanced driver assistance systems are the newest addition to vehicular technology. Such systems use a wide array of sensors to provide a superior driving experience. Vehicle safety and driver alert are important parts of these system. This paper proposes a driver alert system to prevent and mitigate adjacent vehicle collisions by proving warning information of on-road vehicles and possible collisions. A dynamic Bayesian network (DBN) is utilized to fuse multiple sensors to provide driver awareness. It detects oncoming adjacent vehicles and gathers ego vehicle motion characteristics using an on-board camera and inertial measurement unit (IMU). A histogram of oriented gradient feature based classifier is used to detect any adjacent vehicles. Vehicles front-rear end and side faces were considered in training the classifier. Ego vehicles heading, speed and acceleration are captured from the IMU and feed into the DBN. The network parameters were learned from data via expectation maximization(EM) algorithm. The DBN is designed to provide two type of warning to the driver, a cautionary warning and a brake alert for possible collision with other vehicles. Experiments were completed on multiple public databases, demonstrating successful warnings and brake alerts in most situations.",
"title": ""
},
{
"docid": "c366303728d2a8ee47fe4cbfe67dec24",
"text": "Terrestrial Gamma-ray Flashes (TGFs), discovered in 1994 by the Compton Gamma-Ray Observatory, are high-energy photon bursts originating in the Earth’s atmosphere in association with thunderstorms. In this paper, we demonstrate theoretically that, while TGFs pass through the atmosphere, the large quantities of energetic electrons knocked out by collisions between photons and air molecules generate excited species of neutral and ionized molecules, leading to a significant amount of optical emissions. These emissions represent a novel type of transient luminous events in the vicinity of the cloud tops. We show that this predicted phenomenon illuminates a region with a size notably larger than the TGF source and has detectable levels of brightness. Since the spectroscopic, morphological, and temporal features of this luminous event are closely related with TGFs, corresponding measurements would provide a novel perspective for investigation of TGFs, as well as lightning discharges that produce them.",
"title": ""
},
{
"docid": "d1e2948af948822746fcc03bc79d6d2a",
"text": "The scale of modern datasets necessitates the development of efficient distributed optimization methods for machine learning. We present a general-purpose framework for distributed computing environments, CoCoA, that has an efficient communication scheme and is applicable to a wide variety of problems in machine learning and signal processing. We extend the framework to cover general non-strongly-convex regularizers, including L1-regularized problems like lasso, sparse logistic regression, and elastic net regularization, and show how earlier work can be derived as a special case. We provide convergence guarantees for the class of convex regularized loss minimization objectives, leveraging a novel approach in handling non-strongly-convex regularizers and non-smooth loss functions. The resulting framework has markedly improved performance over state-of-the-art methods, as we illustrate with an extensive set of experiments on real distributed datasets.",
"title": ""
},
{
"docid": "242a2f64fc103af641320c1efe338412",
"text": "The availability of data on digital traces is growing to unprecedented sizes, but inferring actionable knowledge from large-scale data is far from being trivial. This is especially important for computational finance, where digital traces of human behaviour offer a great potential to drive trading strategies. We contribute to this by providing a consistent approach that integrates various datasources in the design of algorithmic traders. This allows us to derive insights into the principles behind the profitability of our trading strategies. We illustrate our approach through the analysis of Bitcoin, a cryptocurrency known for its large price fluctuations. In our analysis, we include economic signals of volume and price of exchange for USD, adoption of the Bitcoin technology and transaction volume of Bitcoin. We add social signals related to information search, word of mouth volume, emotional valence and opinion polarization as expressed in tweets related to Bitcoin for more than 3 years. Our analysis reveals that increases in opinion polarization and exchange volume precede rising Bitcoin prices, and that emotional valence precedes opinion polarization and rising exchange volumes. We apply these insights to design algorithmic trading strategies for Bitcoin, reaching very high profits in less than a year. We verify this high profitability with robust statistical methods that take into account risk and trading costs, confirming the long-standing hypothesis that trading-based social media sentiment has the potential to yield positive returns on investment.",
"title": ""
},
{
"docid": "e86247471d4911cb84aa79911547045b",
"text": "Creating rich representations of environments requires integration of multiple sensing modalities with complementary characteristics such as range and imaging sensors. To precisely combine multisensory information, the rigid transformation between different sensor coordinate systems (i.e., extrinsic parameters) must be estimated. The majority of existing extrinsic calibration techniques require one or multiple planar calibration patterns (such as checkerboards) to be observed simultaneously from the range and imaging sensors. The main limitation of these approaches is that they require modifying the scene with artificial targets. In this paper, we present a novel algorithm for extrinsically calibrating a range sensor with respect to an image sensor with no requirement of external artificial targets. The proposed method exploits natural linear features in the scene to precisely determine the rigid transformation between the coordinate frames. First, a set of 3D lines (plane intersection and boundary line segments) are extracted from the point cloud, and a set of 2D line segments are extracted from the image. Correspondences between the 3D and 2D line segments are used as inputs to an optimization problem which requires jointly estimating the relative translation and rotation between the coordinate frames. The proposed method is not limited to any particular types or configurations of sensors. To demonstrate robustness, efficiency and generality of the presented algorithm, we include results using various sensor configurations.",
"title": ""
},
{
"docid": "4e2bfd87acf1287f36694634a6111b3f",
"text": "This paper presents a model for managing departure aircraft at the spot or gate on the airport surface. The model is applied over two time frames: long term (one hour in future) for collaborative decision making, and short term (immediate) for decisions regarding the release of aircraft. The purpose of the model is to provide the controller a schedule of spot or gate release times optimized for runway utilization. This model was tested in nominal and heavy surface traffic scenarios in a simulated environment, and results indicate average throughput improvement of 10% in high traffic scenarios even with up to two minutes of uncertainty in spot arrival times.",
"title": ""
},
{
"docid": "6f31beb59f3f410f5d44446a4b75247a",
"text": "An approach for estimating direction-of-arrival (DoA) based on power output cross-correlation and antenna pattern diversity is proposed for a reactively steerable antenna. An \"estimator condition\" is proposed, from which the most appropriate pattern shape is derived. Computer simulations with directive beam patterns obtained from an electronically steerable parasitic array radiator antenna model are conducted to illustrate the theory and to inspect the method performance with respect to the \"estimator condition\". The simulation results confirm that a good estimation can be expected when suitable directive patterns are chosen. In addition, to verify performance, experiments on estimating DoA are conducted in an anechoic chamber for several angles of arrival and different scenarios of antenna adjustable reactance values. The results show that the proposed method can provide high-precision DoA estimation.",
"title": ""
},
{
"docid": "57602f5e2f64514926ab96551f2b4fb6",
"text": "Landscape genetics has seen rapid growth in number of publications since the term was coined in 2003. An extensive literature search from 1998 to 2008 using keywords associated with landscape genetics yielded 655 articles encompassing a vast array of study organisms, study designs and methodology. These publications were screened to identify 174 studies that explicitly incorporated at least one landscape variable with genetic data. We systematically reviewed this set of papers to assess taxonomic and temporal trends in: (i) geographic regions studied; (ii) types of questions addressed; (iii) molecular markers used; (iv) statistical analyses used; and (v) types and nature of spatial data used. Overall, studies have occurred in geographic regions proximal to developed countries and more commonly in terrestrial vs. aquatic habitats. Questions most often focused on effects of barriers and/or landscape variables on gene flow. The most commonly used molecular markers were microsatellites and amplified fragment length polymorphism (AFLPs), with AFLPs used more frequently in plants than animals. Analysis methods were dominated by Mantel and assignment tests. We also assessed differences among journals to evaluate the uniformity of reporting and publication standards. Few studies presented an explicit study design or explicit descriptions of spatial extent. While some landscape variables such as topographic relief affected most species studied, effects were not universal, and some species appeared unaffected by the landscape. Effects of habitat fragmentation were mixed, with some species altering movement paths and others unaffected. Taken together, although some generalities emerged regarding effects of specific landscape variables, results varied, thereby reinforcing the need for species-specific work. We conclude by: highlighting gaps in knowledge and methodology, providing guidelines to authors and reviewers of landscape genetics studies, and suggesting promising future directions of inquiry.",
"title": ""
},
{
"docid": "2c19e34ba53e7eb8631d979c83ee3e55",
"text": "This paper is the first attempt to learn the policy of an inquiry dialog system (IDS) by using deep reinforcement learning (DRL). Most IDS frameworks represent dialog states and dialog acts with logical formulae. In order to make learning inquiry dialog policies more effective, we introduce a logical formula embedding framework based on a recursive neural network. The results of experiments to evaluate the effect of 1) the DRL and 2) the logical formula embedding framework show that the combination of the two are as effective or even better than existing rule-based methods for inquiry dialog policies.",
"title": ""
},
{
"docid": "39188ae46f22dd183f356ba78528b720",
"text": "Systemic risk is a key concern for central banks charged with safeguarding overall financial stability. In this paper we investigate how systemic risk is affected by the structure of the financial system. We construct banking systems that are composed of a number of banks that are connected by interbank linkages. We then vary the key parameters that define the structure of the financial system — including its level of capitalisation, the degree to which banks are connected, the size of interbank exposures and the degree of concentration of the system — and analyse the influence of these parameters on the likelihood of contagious (knock-on) defaults. First, we find that the better capitalised banks are, the more resilient is the banking system against contagious defaults and this effect is non-linear. Second, the effect of the degree of connectivity is non-monotonic, that is, initially a small increase in connectivity increases the contagion effect; but after a certain threshold value, connectivity improves the ability of a banking system to absorb shocks. Third, the size of interbank liabilities tends to increase the risk of knock-on default, even if banks hold capital against such exposures. Fourth, more concentrated banking systems are shown to be prone to larger systemic risk, all else equal. In an extension to the main analysis we study how liquidity effects interact with banking structure to produce a greater chance of systemic breakdown. We finally consider how the risk of contagion might depend on the degree of asymmetry (tiering) inherent in the structure of the banking system. A number of our results have important implications for public policy, which this paper also draws out.",
"title": ""
},
{
"docid": "3a0d2784b1115e82a4aedad074da8c74",
"text": "The aim of this paper is to present how to implement a control volume approach improved by Hermite radial basis functions (CV-RBF) for geochemical problems. A multi-step strategy based on Richardson extrapolation is proposed as an alternative to the conventional dual step sequential non-iterative approach (SNIA) for coupling the transport equations with the chemical model. Additionally, this paper illustrates how to use PHREEQC to add geochemical reaction capabilities to CV-RBF transport methods. Several problems with different degrees of complexity were solved including cases of cation exchange, dissolution, dissociation, equilibrium and kinetics at different rates for mineral species. The results show that the solution and strategies presented here are effective and in good agreement with other methods presented in the literature for the same cases. & 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "154ab0cbc1dfa3c4bae8a846f800699e",
"text": "This paper presents a new strategy for the active disturbance rejection control (ADRC) of a general uncertain system with unknown bounded disturbance based on a nonlinear sliding mode extended state observer (SMESO). Firstly, a nonlinear extended state observer is synthesized using sliding mode technique for a general uncertain system assuming asymptotic stability. Then the convergence characteristics of the estimation error are analyzed by Lyapunov strategy. It revealed that the proposed SMESO is asymptotically stable and accurately estimates the states of the system in addition to estimating the total disturbance. Then, an ADRC is implemented by using a nonlinear state error feedback (NLSEF) controller; that is suggested by J. Han and the proposed SMESO to control and actively reject the total disturbance of a permanent magnet DC (PMDC) motor. These disturbances caused by the unknown exogenous disturbances and the matched uncertainties of the controlled model. The proposed SMESO is compared with the linear extended state observer (LESO). Through digital simulations using MATLAB / SIMULINK, the chattering phenomenon has been reduced dramatically on the control input channel compared to LESO. Finally, the closed-loop system exhibits a high immunity to torque disturbance and quite robustness to matched uncertainties in the system. Keywords—extended state observer; sliding mode; rejection control; tracking differentiator; DC motor; nonlinear state feedback",
"title": ""
},
{
"docid": "055faaaa14959a204ca19a4962f6e822",
"text": "Data mining (also known as knowledge discovery from databases) is the process of extraction of hidden, previously unknown and potentially useful information from databases. The outcome of the extracted data can be analyzed for the future planning and development perspectives. In this paper, we have made an attempt to demonstrate how one can extract the local (district) level census, socio-economic and population related other data for knowledge discovery and their analysis using the powerful data mining tool Weka. I. DATA MINING Data mining has been defined as the nontrivial extraction of implicit, previously unknown, and potentially useful information from databases/data warehouses. It uses machine learning, statistical and visualization techniques to discover and present knowledge in a form, which is easily comprehensive to humans [1]. Data mining, the extraction of hidden predictive information from large databases, is a powerful new technology with great potential to help user focus on the most important information in their data warehouses. Data mining tools predict future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. The automated, prospective analyses offered by data mining move beyond the analyses of past events provided by retrospective tools typical of decision support systems. Data mining tools can answer business questions that traditionally were too time consuming to resolve. They scour databases for hidden patterns, finding predictive information that experts may miss because it lies outside their expectations. Data mining techniques can be implemented rapidly on existing software and hardware platforms to enhance the value of existing information resources, and can be integrated with new products and systems as they are brought on-line [2]. Data mining steps in the knowledge discovery process are as follows: 1. Data cleaningThe removal of noise and inconsistent data. 2. Data integration The combination of multiple sources of data. 3. Data selection The data relevant for analysis is retrieved from the database. 4. Data transformation The consolidation and transformation of data into forms appropriate for mining. 5. Data mining The use of intelligent methods to extract patterns from data. 6. Pattern evaluation Identification of patterns that are interesting. (ICETSTM – 2013) International Conference in “Emerging Trends in Science, Technology and Management-2013, Singapore Census Data Mining and Data Analysis using WEKA 36 7. Knowledge presentation Visualization and knowledge representation techniques are used to present the extracted or mined knowledge to the end user [3]. The actual data mining task is the automatic or semi-automatic analysis of large quantities of data to extract previously unknown interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection) and dependencies (association rule mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. 
Neither the data collection, data preparation, nor result interpretation and reporting are part of the data mining step, but do belong to the overall KDD process as additional steps [7][8]. II. WEKA: Weka (Waikato Environment for Knowledge Analysis) is a popular suite of machine learning software written in Java, developed at the University of Waikato, New Zealand. Weka is free software available under the GNU General Public License. The Weka workbench contains a collection of visualization tools and algorithms for data analysis and predictive modeling, together with graphical user interfaces for easy access to this functionality [4]. Weka is a collection of machine learning algorithms for solving real-world data mining problems. It is written in Java and runs on almost any platform. The algorithms can either be applied directly to a dataset or called from your own Java code [5]. The original non-Java version of Weka was a TCL/TK front-end to (mostly third-party) modeling algorithms implemented in other programming languages, plus data preprocessing utilities in C, and a Makefile-based system for running machine learning experiments. This original version was primarily designed as a tool for analyzing data from agricultural domains, but the more recent fully Java-based version (Weka 3), for which development started in 1997, is now used in many different application areas, in particular for educational purposes and research. Advantages of Weka include: I. Free availability under the GNU General Public License II. Portability, since it is fully implemented in the Java programming language and thus runs on almost any modern computing platform III. A comprehensive collection of data preprocessing and modeling techniques IV. Ease of use due to its graphical user interfaces Weka supports several standard data mining tasks, more specifically, data preprocessing, clustering, classification, regression, visualization, and feature selection [10]. All of Weka's techniques are predicated on the assumption that the data is available as a single flat file or relation, where each data point is described by a fixed number of attributes (normally, numeric or nominal attributes, but some other attribute types are also supported). Weka provides access to SQL databases using Java Database Connectivity and can process the result returned by a database query. It is not capable of multi-relational data mining, but there is separate software for converting a collection of linked database tables into a single table that is suitable for (ICETSTM – 2013) International Conference in “Emerging Trends in Science, Technology and Management-2013, Singapore Census Data Mining and Data Analysis using WEKA 37 processing using Weka. Another important area that is currently not covered by the algorithms included in the Weka distribution is sequence modeling [4]. III. DATA PROCESSING, METHODOLOGY AND RESULTS The primary available data such as census (2001), socio-economic data, and few basic information of Latur district are collected from National Informatics Centre (NIC), Latur, which is mainly required to design and develop the database for Latur district of Maharashtra state of India. The database is designed in MS-Access 2003 database management system to store the collected data. The data is formed according to the required format and structures. Further, the data is converted to ARFF (Attribute Relation File Format) format to process in WEKA. 
An ARFF file is an ASCII text file that describes a list of instances sharing a set of attributes. ARFF files were developed by the Machine Learning Project at the Department of Computer Science of The University of Waikato for use with the Weka machine learning software. This document descibes the version of ARFF used with Weka versions 3.2 to 3.3; this is an extension of the ARFF format as described in the data mining book written by Ian H. Witten and Eibe Frank [6][9]. After processing the ARFF file in WEKA the list of all attributes, statistics and other parameters can be utilized as shown in Figure 1. Fig.1 Processed ARFF file in WEKA. In the above shown file, there are 729 villages data is processed with different attributes (25) like population, health, literacy, village locations etc. Among all these, few of them are preprocessed attributes generated by census data like, percent_male_literacy, total_percent_literacy, total_percent_illiteracy, sex_ratio etc. (ICETSTM – 2013) International Conference in “Emerging Trends in Science, Technology and Management-2013, Singapore Census Data Mining and Data Analysis using WEKA 38 The processed data in Weka can be analyzed using different data mining techniques like, Classification, Clustering, Association rule mining, Visualization etc. algorithms. The Figure 2 shows the few processed attributes which are visualized into a 2 dimensional graphical representation. Fig. 2 Graphical visualization of processed attributes. The information can be extracted with respect to two or more associative relation of data set. In this process, we have made an attempt to visualize the impact of male and female literacy on the gender inequality. The literacy related and population data is processed and computed the percent wise male and female literacy. Accordingly we have computed the sex ratio attribute from the given male and female population data. The new attributes like, male_percent_literacy, female_percent_literacy and sex_ratio are compared each other to extract the impact of literacy on gender inequality. The Figure 3 and Figure 4 are the extracted results of sex ratio values with male and female literacy. Fig. 3 Female literacy and Sex ratio values. (ICETSTM – 2013) International Conference in “Emerging Trends in Science, Technology and Management-2013, Singapore Census Data Mining and Data Analysis using WEKA 39 Fig. 4 Male literacy and Sex ratio values. On the Y-axis, the female percent literacy values are shown in Figure 3, and the male percent literacy values are shown in Figure 4. By considering both the results, the female percent literacy is poor than the male percent literacy in the district. The sex ratio values are higher in male percent literacy than the female percent literacy. The results are purely showing that the literacy is very much important to manage the gender inequality of any region. ACKNOWLEDGEMENT: Authors are grateful to the department of NIC, Latur for providing all the basic data and WEKA for providing such a strong tool to extract and analyze knowledge from database. CONCLUSION Knowledge extraction from database is becom",
"title": ""
},
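The passage above describes converting census data to ARFF before mining it in Weka. As a purely illustrative sketch (not taken from the paper), the snippet below writes a minimal ARFF file using a few of the census-style attributes the passage mentions; the relation name, attribute list and sample rows are assumptions for demonstration only.

```python
# Illustrative sketch: write a minimal ARFF file of census-style village records.
# Attribute names and the two sample rows are hypothetical, chosen to mirror the passage.

records = [
    # (village, population, percent_male_literacy, percent_female_literacy, sex_ratio)
    ("Village_A", 2145, 78.2, 55.4, 934),
    ("Village_B", 980, 69.5, 48.1, 951),
]

header = """@relation census_villages

@attribute village string
@attribute population numeric
@attribute percent_male_literacy numeric
@attribute percent_female_literacy numeric
@attribute sex_ratio numeric

@data
"""

with open("census_villages.arff", "w") as f:
    f.write(header)
    for name, pop, male_lit, female_lit, ratio in records:
        f.write(f"{name},{pop},{male_lit},{female_lit},{ratio}\n")
```

A file in this form can then be opened directly in the Weka Explorer for the preprocessing, visualization and mining steps described in the passage.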
{
"docid": "2757d2ab9c3fbc2eb01385771f297a71",
"text": "In this brief, we propose a variable structure based nonlinear missile guidance/autopilot system with highly maneuverable actuators, mainly consisting of thrust vector control and divert control system, for the task of intercepting of a theater ballistic missile. The aim of the present work is to achieve bounded target interception under the mentioned 5 degree-of-freedom (DOF) control such that the distance between the missile and the target will enter the range of triggering the missile's explosion. First, a 3-DOF sliding-mode guidance law of the missile considering external disturbances and zero-effort-miss (ZEM) is designed to minimize the distance between the center of the missile and that of the target. Next, a quaternion-based sliding-mode attitude controller is developed to track the attitude command while coping with variation of missile's inertia and uncertain aerodynamic force/wind gusts. The stability of the overall system and ZEM-phase convergence are analyzed thoroughly via Lyapunov stability theory. Extensive simulation results are obtained to validate the effectiveness of the proposed integrated guidance/autopilot system by use of the 5-DOF inputs.",
"title": ""
},
{
"docid": "bfcd962b099e6e751125ac43646d76cc",
"text": "Dear Editor: We read carefully and with great interest the anatomic study performed by Lilyquist et al. They performed an interesting study of the tibiofibular syndesmosis using a 3-dimensional method that can be of help when performing anatomic studies. As the authors report in the study, a controversy exists regarding the anatomic structures of the syndesmosis, and a huge confusion can be observed when reading the related literature. However, anatomic confusion between the inferior transverse ligament and the intermalleolar ligament is present in the manuscript: the intermalleolar ligament is erroneously identified as the “inferior” transverse ligament. The transverse ligament is the name that receives the deep component of the posterior tibiofibular ligament. The posterior tibiofibular ligament is a ligament located in the posterior aspect of the ankle that joins the distal epiphysis of tibia and fibula; it is formed by 2 fascicles, one superficial and one deep. The deep fascicle or transverse ligament is difficult to see from a posterior ankle view, but easily from a plantar view of the tibiofibular syndesmosis (Figure 1). Instead, the intermalleolar ligament is a thickening of the posterior ankle joint capsule, located between the posterior talofibular ligament and the transverse ligament. It originates from the medial facet of the lateral malleolus and directs medially to tibia and talus (Figure 2). The intermalleolar ligament was observed in 100% of the specimens by Golanó et al in contrast with 70% in Lilyquist’s study. On the other hand, structures of the ankle syndesmosis have not been named according to the International Anatomical Terminology (IAT). In 1955, the VI Federative International Congress of Anatomy accorded to eliminate eponyms from the IAT. Because of this measure, the Chaput, Wagstaff, or Volkman tubercles used in the manuscript should be eliminated in order to avoid increasing confusion. Lilyquist et al also defined the tibiofibular syndesmosis as being formed by the anterior inferior tibiofibular ligament, the posterior inferior tibiofibular ligament, the interosseous ligament, and the inferior transverse ligament. The anterior inferior tibiofibular ligament and posterior inferior tibiofibular ligament of the tibiofibular syndesmosis (or inferior tibiofibular joint) should be referred to as the anterior tibiofibular ligament and posterior tibiofibular ligament. The reason why it is not necessary to use “inferior” in its description is that the ligaments of the superior tibiofibular joint are the anterior ligament of the fibular head and the posterior ligament of the fibular head, not the “anterior superior tibiofibular ligament” and “posterior superior tibiofibular ligament.” The ankle syndesmosis is one of the areas of the human body where chronic anatomic errors exist: the transverse ligament (deep component of the posterior tibiofibular ligament), the anterior tibiofibular ligament (“anterior 689614 FAIXXX10.1177/1071100716689614Foot & Ankle InternationalLetter to the Editor letter2017",
"title": ""
},
{
"docid": "3564cf609cf1b9666eaff7edcd12a540",
"text": "Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.",
"title": ""
},
{
"docid": "06f27036cd261647c7670bdf854f5fb4",
"text": "OBJECTIVE\nTo determine the formation and dissolution of calcium fluoride on the enamel surface after application of two fluoride gel-saliva mixtures.\n\n\nMETHOD AND MATERIALS\nFrom each of 80 bovine incisors, two enamel specimens were prepared and subjected to two different treatment procedures. In group 1, 80 specimens were treated with a mixture of an amine fluoride gel (1.25% F-; pH 5.2; 5 minutes) and human saliva. In group 2, 80 enamel blocks were subjected to a mixture of sodium fluoride gel (1.25% F; pH 5.5; 5 minutes) and human saliva. Subsequent to fluoride treatment, 40 specimens from each group were stored in human saliva and sterile water, respectively. Ten specimens were removed after each of 1 hour, 24 hours, 2 days, and 5 days and analyzed according to potassium hydroxide-soluble fluoride.\n\n\nRESULTS\nApplication of amine fluoride gel resulted in a higher amount of potassium hydroxide-soluble fluoride than did sodium fluoride gel 1 hour after application. Saliva exerted an inhibitory effect according to the dissolution rate of calcium fluoride. However, after 5 days, more than 90% of the precipitated calcium fluoride was dissolved in the amine fluoride group, and almost all potassium hydroxide-soluble fluoride was lost in the sodium fluoride group. Calcium fluoride apparently dissolves rapidly, even at almost neutral pH.\n\n\nCONCLUSION\nConsidering the limitations of an in vitro study, it is concluded that highly concentrated fluoride gels should be applied at an adequate frequency to reestablish a calcium fluoride-like layer.",
"title": ""
}
] |
scidocsrr
|
c99c84bf59c33895d74c2c5fa30f9650
|
Why and how Java developers break APIs
|
[
{
"docid": "fd22861fbb2661a135f9a421d621ba35",
"text": "When APIs evolve, clients make corresponding changes to their applications to utilize new or updated APIs. Despite the benefits of new or updated APIs, developers are often slow to adopt the new APIs. As a first step toward understanding the impact of API evolution on software ecosystems, we conduct an in-depth case study of the co-evolution behavior of Android API and dependent applications using the version history data found in github. Our study confirms that Android is evolving fast at a rate of 115 API updates per month on average. Client adoption, however, is not catching up with the pace of API evolution. About 28% of API references in client applications are outdated with a median lagging time of 16 months. 22% of outdated API usages eventually upgrade to use newer API versions, but the propagation time is about 14 months, much slower than the average API release interval (3 months). Fast evolving APIs are used more by clients than slow evolving APIs but the average time taken to adopt new versions is longer for fast evolving APIs. Further, API usage adaptation code is more defect prone than the one without API usage adaptation. This may indicate that developers avoid API instability.",
"title": ""
}
] |
[
{
"docid": "08565176f7a68c27f20756e468663b47",
"text": "Speech processing is emerged as one of the important application area of digital signal processing. Various fields for research in speech processing are speech recognition, speaker recognition, speech synthesis, speech coding etc. The objective of automatic speaker recognition is to extract, characterize and recognize the information about speaker identity. Feature extraction is the first step for speaker recognition. Many algorithms are suggested/developed by the researchers for feature extraction. In this work, the Mel Frequency Cepstrum Coefficient (MFCC) feature has been used for designing a text dependent speaker identification system. Some modifications to the existing technique of MFCC for feature extraction are also suggested to improve the speaker recognition efficiency.",
"title": ""
},
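Since the passage relies on MFCC feature extraction without spelling out the pipeline, here is a minimal, illustrative NumPy/SciPy sketch of the standard steps (pre-emphasis, framing, windowing, power spectrum, mel filterbank, log, DCT). The parameter values are common defaults, not the configuration used in the paper, and the sketch assumes the input signal is at least one frame long.

```python
import numpy as np
from scipy.fftpack import dct

def mfcc(signal, sr=16000, n_fft=512, frame_len=0.025, frame_step=0.010,
         n_filters=26, n_ceps=13):
    """Illustrative MFCC extraction; parameters are typical defaults, not the paper's."""
    # Pre-emphasis boosts high frequencies before analysis
    emphasized = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])

    # Split into overlapping frames and apply a Hamming window
    flen, fstep = int(frame_len * sr), int(frame_step * sr)
    n_frames = 1 + max(0, (len(emphasized) - flen) // fstep)
    frames = np.stack([emphasized[i * fstep: i * fstep + flen] for i in range(n_frames)])
    frames = frames * np.hamming(flen)

    # Power spectrum of each frame
    pow_spec = (np.abs(np.fft.rfft(frames, n_fft)) ** 2) / n_fft

    # Triangular mel-spaced filterbank
    mel_max = 2595 * np.log10(1 + (sr / 2) / 700)
    hz_pts = 700 * (10 ** (np.linspace(0, mel_max, n_filters + 2) / 2595) - 1)
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for m in range(1, n_filters + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    # Log filterbank energies, decorrelated by a DCT, give the cepstral coefficients
    feats = np.log(pow_spec @ fbank.T + 1e-10)
    return dct(feats, type=2, axis=1, norm='ortho')[:, :n_ceps]
```

A text-dependent identification system would then compare these frame-level features against stored templates for each enrolled speaker, for example with dynamic time warping or a simple distance measure.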
{
"docid": "e3027bdccdb21de2cc395af603675702",
"text": "Extraction of the lower third molar is one of the most common procedures performed in oral surgery. In general, impacted tooth extraction involves sectioning the tooth’s crown and roots. In order to divide the impacted tooth so that it can be extracted, high-speed air turbine drills are frequently used. However, complications related to air turbine drills may occur. In this report, we propose an alternative tooth sectioning method that obviates the need for air turbine drill use by using a low-speed straight handpiece and carbide bur. A 21-year-old female patient presented to the institute’s dental hospital complaining of symptoms localized to the left lower third molar tooth that were suggestive of impaction. After physical examination, tooth extraction of the impacted left lower third molar was proposed and the patient consented to the procedure. The crown was divided using a conventional straight low-speed handpiece and carbide bur. This carbide bur can easily cut through the enamel of crown. On post-operative day number five, suture was removed and the wound was extremely clear. This technique could minimise intra-operative time and reduce the morbidity associated with air turbine drill assisted lower third molar extraction.",
"title": ""
},
{
"docid": "2794ea63eb1a24ebd1cea052345569eb",
"text": "Ethernet is considered as a future communication standard for distributed embedded systems in the automotive and industrial domains. A key challenge is the deterministic low-latency transport of Ethernet frames, as many safety-critical real-time applications in these domains have tight timing requirements. Time-sensitive networking (TSN) is an upcoming set of Ethernet standards, which (among other things) address these requirements by specifying new quality of service mechanisms in the form of different traffic shapers. In this paper, we consider TSN's time-aware and peristaltic shapers and evaluate whether these shapers are able to fulfill these strict timing requirements. We present a formal timing analysis, which is a key requirement for the adoption of Ethernet in safety-critical real-time systems, to derive worst-case latency bounds for each shaper. We use a realistic automotive Ethernet setup to compare these shapers to each other and against Ethernet following IEEE 802.1Q.",
"title": ""
},
{
"docid": "d7e8a55c9d1ad24a82ea25a27ac08076",
"text": "We present online learning techniques for statistical machine translation (SMT). The availability of large training data sets that grow constantly over time is becoming more and more frequent in the field of SMT—for example, in the context of translation agencies or the daily translation of government proceedings. When new knowledge is to be incorporated in the SMT models, the use of batch learning techniques require very time-consuming estimation processes over the whole training set that may take days or weeks to be executed. By means of the application of online learning, new training samples can be processed individually in real time. For this purpose, we define a state-of-the-art SMT model composed of a set of submodels, as well as a set of incremental update rules for each of these submodels. To test our techniques, we have studied two well-known SMT applications that can be used in translation agencies: post-editing and interactive machine translation. In both scenarios, the SMT system collaborates with the user to generate high-quality translations. These user-validated translations can be used to extend the SMT models by means of online learning. Empirical results in the two scenarios under consideration show the great impact of frequent updates in the system performance. The time cost of such updates was also measured, comparing the efficiency of a batch learning SMT system with that of an online learning system, showing that online learning is able to work in real time whereas the time cost of batch retraining soon becomes infeasible. Empirical results also showed that the performance of online learning is comparable to that of batch learning. Moreover, the proposed techniques were able to learn from previously estimated models or from scratch. We also propose two new measures to predict the effectiveness of online learning in SMT tasks. The translation system with online learning capabilities presented here is implemented in the open-source Thot toolkit for SMT.",
"title": ""
},
{
"docid": "4b78f107ee628cefaeb80296e4f9ae27",
"text": "On shared-memory systems, Cilk-style work-stealing has been used to effectively parallelize irregular task-graph based applications such as Unbalanced Tree Search (UTS). There are two main difficulties in extending this approach to distributed memory. In the shared memory approach, thieves (nodes without work) constantly attempt to asynchronously steal work from randomly chosen victims until they find work. In distributed memory, thieves cannot autonomously steal work from a victim without disrupting its execution. When work is sparse, this results in performance degradation. In essence, a direct extension of traditional work-stealing to distributed memory violates the work-first principle underlying work-stealing. Further, thieves spend useless CPU cycles attacking victims that have no work, resulting in system inefficiencies in multi-programmed contexts. Second, it is non-trivial to detect active distributed termination (detect that programs at all nodes are looking for work, hence there is no work). This problem is well-studied and requires careful design for good performance. Unfortunately, in most existing languages/frameworks, application developers are forced to implement their own distributed termination detection.\n In this paper, we develop a simple set of ideas that allow work-stealing to be efficiently extended to distributed memory. First, we introduce lifeline graphs: low-degree, low-diameter, fully connected directed graphs. Such graphs can be constructed from k-dimensional hypercubes. When a node is unable to find work after w unsuccessful steals, it quiesces after informing the outgoing edges in its lifeline graph. Quiescent nodes do not disturb other nodes. A quiesced node is reactivated when work arrives from a lifeline and itself shares this work with those of its incoming lifelines that are activated. Termination occurs precisely when computation at all nodes has quiesced. In a language such as X10, such passive distributed termination can be detected automatically using the finish construct -- no application code is necessary.\n Our design is implemented in a few hundred lines of X10. On the binomial tree described in olivier:08}, the program achieve 87% efficiency on an Infiniband cluster of 1024 Power7 cores, with a peak throughput of 2.37 GNodes/sec. It achieves 87% efficiency on a Blue Gene/P with 2048 processors, and a peak throughput of 0.966 GNodes/s. All numbers are relative to single core sequential performance. This implementation has been refactored into a reusable global load balancing framework. Applications can use this framework to obtain global load balance with minimal code changes.\n In summary, we claim: (a) the first formulation of UTS that does not involve application level global termination detection, (b) the introduction of lifeline graphs to reduce failed steals (c) the demonstration of simple lifeline graphs based on k-hypercubes, (d) performance with superior efficiency (or the same efficiency but over a wider range) than published results on UTS. In particular, our framework can deliver the same or better performance as an unrestricted random work-stealing implementation, while reducing the number of attempted steals.",
"title": ""
},
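The passage explains that lifeline graphs are low-degree, low-diameter, fully connected directed graphs that can be built from k-dimensional hypercubes, but gives no construction. As an illustrative sketch only (not the X10 code from the paper), the snippet below derives one such construction for P = 2^k places: each place's outgoing lifelines are its hypercube neighbours, giving both degree and diameter of log2(P).

```python
# Illustrative lifeline construction for P = 2**k places (hypercube edges).
# A quiesced place only wakes up when work arrives over one of these edges.

def lifelines(place: int, k: int) -> list[int]:
    """Outgoing lifeline edges of `place` in a k-dimensional hypercube."""
    return [place ^ (1 << bit) for bit in range(k)]

k = 3  # 8 places
for p in range(2 ** k):
    print(p, "->", lifelines(p, k))
# Every place has k = log2(P) lifelines, and any place can reach any other
# in at most k hops, so the graph is low-degree, low-diameter and connected.
```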
{
"docid": "1c5cba8f3533880b19e9ef98a296ef57",
"text": "Internal organs are hidden and untouchable, making it difficult for children to learn their size, position, and function. Traditionally, human anatomy (body form) and physiology (body function) are taught using techniques ranging from worksheets to three-dimensional models. We present a new approach called BodyVis, an e-textile shirt that combines biometric sensing and wearable visualizations to reveal otherwise invisible body parts and functions. We describe our 15-month iterative design process including lessons learned through the development of three prototypes using participatory design and two evaluations of the final prototype: a design probe interview with seven elementary school teachers and three single-session deployments in after-school programs. Our findings have implications for the growing area of wearables and tangibles for learning.",
"title": ""
},
{
"docid": "56110c3d5b88b118ad98bfd077f00221",
"text": "Advances in mobile robotics have enabled robots that can autonomously operate in human-populated environments. Although primary tasks for such robots might be fetching, delivery, or escorting, they present an untapped potential as information gathering agents that can answer questions for the community of co-inhabitants. In this paper, we seek to better understand requirements for such information gathering robots (InfoBots) from the perspective of the user requesting the information. We present findings from two studies: (i) a user survey conducted in two office buildings and (ii) a 4-day long deployment in one of the buildings, during which inhabitants of the building could ask questions to an InfoBot through a web-based interface. These studies allow us to characterize the types of information that InfoBots can provide for their users.",
"title": ""
},
{
"docid": "de5c439731485929416b0e729f7f79b2",
"text": "The feedback dynamics from mosquito to human and back to mosquito involve considerable time delays due to the incubation periods of the parasites. In this paper, taking explicit account of the incubation periods of parasites within the human and the mosquito, we first propose a delayed Ross-Macdonald model. Then we calculate the basic reproduction number R0 and carry out some sensitivity analysis of R0 on the incubation periods, that is, to study the effect of time delays on the basic reproduction number. It is shown that the basic reproduction number is a decreasing function of both time delays. Thus, prolonging the incubation periods in either humans or mosquitos (via medicine or control measures) could reduce the prevalence of infection.",
"title": ""
},
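The abstract states that the basic reproduction number of the delayed Ross-Macdonald model decreases in both incubation delays but does not reproduce the formula. The sketch below evaluates one commonly used illustrative form, R0 = (m a^2 b c / (r mu)) * exp(-mu*tau_m) * exp(-d*tau_h), in which the exponential factors account for mortality during the mosquito (tau_m) and human (tau_h) incubation periods; the exact expression and all parameter values in the paper may differ, so treat this only as a sketch of the qualitative behaviour.

```python
import math

def r0(m, a, b, c, r, mu, d, tau_m, tau_h):
    """Illustrative delayed Ross-Macdonald R0 (assumed form, not the paper's exact formula)."""
    return (m * a**2 * b * c) / (r * mu) * math.exp(-mu * tau_m) * math.exp(-d * tau_h)

# Hypothetical parameter values, for demonstration only
base = dict(m=10.0, a=0.3, b=0.5, c=0.5, r=0.01, mu=0.1, d=0.0001)
for tau_m in (5, 10, 15):      # mosquito incubation period (days)
    for tau_h in (10, 20):     # human incubation period (days)
        print(tau_m, tau_h, round(r0(**base, tau_m=tau_m, tau_h=tau_h), 3))
# R0 shrinks as either delay grows, matching the sensitivity result in the abstract.
```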
{
"docid": "247eced239dfd8c1631d80a592593471",
"text": "In this paper, we propose new algorithms for learning segmentation strategies for simultaneous speech translation. In contrast to previously proposed heuristic methods, our method finds a segmentation that directly maximizes the performance of the machine translation system. We describe two methods based on greedy search and dynamic programming that search for the optimal segmentation strategy. An experimental evaluation finds that our algorithm is able to segment the input two to three times more frequently than conventional methods in terms of number of words, while maintaining the same score of automatic evaluation.1",
"title": ""
},
{
"docid": "8bb465b2ec1f751b235992a79c6f7bf1",
"text": "Continuum robotics has rapidly become a rich and diverse area of research, with many designs and applications demonstrated. Despite this diversity in form and purpose, there exists remarkable similarity in the fundamental simplified kinematic models that have been applied to continuum robots. However, this can easily be obscured, especially to a newcomer to the field, by the different applications, coordinate frame choices, and analytical formalisms employed. In this paper we review several modeling approaches in a common frame and notational convention, illustrating that for piecewise constant curvature, they produce identical results. This discussion elucidates what has been articulated in different ways by a number of researchers in the past several years, namely that constant-curvature kinematics can be considered as consisting of two separate submappings: one that is general and applies to all continuum robots, and another that is robot-specific. These mappings are then developed both for the singlesection and for the multi-section case. Similarly, we discuss the decomposition of differential kinematics (the robot’s Jacobian) into robot-specific and robot-independent portions. The paper concludes with a perspective on several of the themes of current research that are shaping the future of continuum robotics.",
"title": ""
},
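The abstract refers to the robot-independent submapping from constant-curvature arc parameters to the pose of a section tip but does not state it. For the planar single-section case, a standard illustrative form of that mapping, written here from the usual arc geometry rather than copied from the paper, is:

```latex
% Planar, single-section, constant-curvature mapping (illustrative):
% an arc of curvature \kappa and length \ell, starting at the origin and
% initially tangent to the z-axis, ends at
\[
x(\kappa,\ell) = \frac{1-\cos(\kappa\ell)}{\kappa}, \qquad
z(\kappa,\ell) = \frac{\sin(\kappa\ell)}{\kappa}, \qquad
\theta(\kappa,\ell) = \kappa\ell, \qquad
\lim_{\kappa \to 0}\,(x,z,\theta) = (0,\ell,0).
\]
```

In the spatial case a rotation by the bending-plane angle about the base tangent is composed with this arc, and the robot-specific submapping then converts actuator variables (tendon lengths, pneumatic pressures, etc.) into the arc parameters (curvature, plane angle, arc length).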
{
"docid": "5f17432d235a991a5544ad794875a919",
"text": "We consider the problem of optimal control in continuous and partially observable environments when the parameters of the model are not known exactly. Partially observable Markov decision processes (POMDPs) provide a rich mathematical model to handle such environments but require a known model to be solved by most approaches. This is a limitation in practice as the exact model parameters are often difficult to specify exactly. We adopt a Bayesian approach where a posterior distribution over the model parameters is maintained and updated through experience with the environment. We propose a particle filter algorithm to maintain the posterior distribution and an online planning algorithm, based on trajectory sampling, to plan the best action to perform under the current posterior. The resulting approach selects control actions which optimally trade-off between 1) exploring the environment to learn the model, 2) identifying the system's state, and 3) exploiting its knowledge in order to maximize long-term rewards. Our preliminary results on a simulated robot navigation problem show that our approach is able to learn good models of the sensors and actuators, and performs as well as if it had the true model.",
"title": ""
},
{
"docid": "426826d9ede3c0146840e4ec9190e680",
"text": "We propose methods to classify lines of military chat, or posts, which contain items of interest. We evaluated several current text categorization and feature selection methodologies on chat posts. Our chat posts are examples of 'micro-text', or text that is generally very short in length, semi-structured, and characterized by unstructured or informal grammar and language. Although this study focused specifically on tactical updates via chat, we believe the findings are applicable to content of a similar linguistic structure. Completion of this milestone is a significant first step in allowing for more complex categorization and information extraction.",
"title": ""
},
{
"docid": "b27b164a7ff43b8f360167e5f886f18a",
"text": "Segmentation and grouping of image elements is required to proceed with image recognition. Due to the fact that the images are two dimensional (2D) representations of the real three dimensional (3D) scenes, the information of the third dimension, like geometrical relations between the objects that are important for reasonable segmentation and grouping, are lost in 2D image representations. Computer stereo vision implies on understanding information stored in 3D-scene. Techniques for stereo computation are observed in this paper. The methods for solving the correspondence problem in stereo image matching are presented. The process of 3D-scene reconstruction from stereo image pairs and extraction of parameters important for image understanding are described. Occluded and surrounding areas in stereo image pairs are stressed out as important for image understanding.",
"title": ""
},
{
"docid": "3bb48e5bf7cc87d635ab4958553ef153",
"text": "This paper presents an in-depth study of young Swedish consumers and their impulsive online buying behaviour for clothing. The aim of the study is to develop the understanding of what factors affect impulse buying of clothing online and what feelings emerge when buying online. The study carried out was exploratory in nature, aiming to develop an understanding of impulse buying behaviour online before, under and after the actual purchase. The empirical data was collected through personal interviews. In the study, a pattern of the consumers recurrent feelings are identified through the impulse buying process; escapism, pleasure, reward, scarcity, security and anticipation. The escapism is particularly occurring since the study revealed that the consumers often carried out impulse purchases when they initially were bored, as opposed to previous studies. 1 University of Borås, Swedish Institute for Innovative Retailing, School of Business and IT, Allégatan 1, S-501 90 Borås, Sweden. Phone: +46732305934 Mail: [email protected]",
"title": ""
},
{
"docid": "764e5c5201217be1aa9e24ce4fa3760a",
"text": "Working papers are in draft form. This working paper is distributed for purposes of comment and discussion only. It may not be reproduced without permission of the copyright holder. Copies of working papers are available from the author. Please do not copy or distribute without explicit permission of the authors. Abstract Customer defection or churn is a widespread phenomenon that threatens firms across a variety of industries with dramatic financial consequences. To tackle this problem, companies are developing sophisticated churn management strategies. These strategies typically involve two steps – ranking customers based on their estimated propensity to churn, and then offering retention incentives to a subset of customers at the top of the churn ranking. The implicit assumption is that this process would maximize firm's profits by targeting customers who are most likely to churn. However, current marketing research and practice aims at maximizing the correct classification of churners and non-churners. Profit from targeting a customer depends on not only a customer's propensity to churn, but also on her spend or value, her probability of responding to retention offers, as well as the cost of these offers. Overall profit of the firm also depends on the number of customers the firm decides to target for its retention campaign. We propose a predictive model that accounts for all these elements. Our optimization algorithm uses stochastic gradient boosting, a state-of-the-art numerical algorithm based on stage-wise gradient descent. It also determines the optimal number of customers to target. The resulting optimal customer ranking and target size selection leads to, on average, a 115% improvement in profit compared to current methods. Remarkably, the improvement in profit comes along with more prediction errors in terms of which customers will churn. However, the new loss function leads to better predictions where it matters the most for the company's profits. For a company like Verizon Wireless, this translates into a profit increase of at least $28 million from a single retention campaign, without any additional implementation cost.",
"title": ""
},
{
"docid": "1164e5b54ce970b55cf65cca0a1fbcb1",
"text": "We present a technique for automatic placement of authorization hooks, and apply it to the Linux security modules (LSM) framework. LSM is a generic framework which allows diverse authorization policies to be enforced by the Linux kernel. It consists of a kernel module which encapsulates an authorization policy, and hooks into the kernel module placed at appropriate locations in the Linux kernel. The kernel enforces the authorization policy using hook calls. In current practice, hooks are placed manually in the kernel. This approach is tedious, and as prior work has shown, is prone to security holes.Our technique uses static analysis of the Linux kernel and the kernel module to automate hook placement. Given a non-hook-placed version of the Linux kernel, and a kernel module that implements an authorization policy, our technique infers the set of operations authorized by each hook, and the set of operations performed by each function in the kernel. It uses this information to infer the set of hooks that must guard each kernel function. We describe the design and implementation of a prototype tool called TAHOE (Tool for Authorization Hook Placement) that uses this technique. We demonstrate the effectiveness of TAHOE by using it with the LSM implementation of security-enhanced Linux (selinux). While our exposition in this paper focuses on hook placement for LSM, our technique can be used to place hooks in other LSM-like architectures as well.",
"title": ""
},
{
"docid": "4822a22e8fde11bf99eb67f96a8d2443",
"text": "The traditional approach towards human identification such as fingerprints, identity cards, iris recognition etc. lead to the improvised technique for face recognition. This includes enhancement and segmentation of face image, detection of face boundary and facial features, matching of extracted features against the features in a database, and finally recognition of the face. This research proposes a wavelet transformation for preprocessing the face image, extracting edge image, extracting features and finally matching extracted facial features for face recognition. Simulation is done using ORL database that contains PGM images. This research finds application in homeland security where it can increase the robustness of the existing face recognition algorithms.",
"title": ""
},
{
"docid": "171ded161c7d61cfaf4663ba080b0c6a",
"text": "Digital advertisements are delivered in the form of static images, animations or videos, with the goal to promote a product, a service or an idea to desktop or mobile users. Thus, the advertiser pays a monetary cost to buy ad-space in a content provider’s medium (e.g., website) to place their advertisement in the consumer’s display. However, is it only the advertiser who pays for the ad delivery? Unlike traditional advertisements in mediums such as newspapers, TV or radio, in the digital world, the end-users are also paying a cost for the advertisement delivery. Whilst the cost on the advertiser’s side is clearly monetary, on the end-user, it includes both quantifiable costs, such as network requests and transferred bytes, and qualitative costs such as privacy loss to the ad ecosystem. In this study, we aim to increase user awareness regarding the hidden costs of digital advertisement in mobile devices, and compare the user and advertiser views. Specifically, we built OpenDAMP, a transparency tool that passively analyzes users’ web traffic and estimates the costs in both sides. We use a year-long dataset of 1270 real mobile users and by juxtaposing the costs of both sides, we identify a clear imbalance: the advertisers pay several times less to deliver ads, than the cost paid by the users to download them. In addition, the majority of users experience a significant privacy loss, through the personalized ad delivery mechanics.",
"title": ""
},
{
"docid": "3ff58e78ac9fe623e53743ad05248a30",
"text": "Clock gating is an effective technique for minimizing dynamic power in sequential circuits. Applying clock-gating at gate-level not only saves time compared to implementing clock-gating in the RTL code but also saves power and can easily be automated in the synthesis process. This paper presents simulation results on various types of clock-gating at different hierarchical levels on a serial peripheral interface (SPI) design. In general power savings of about 30% and 36% reduction on toggle rate can be seen with different complex clock- gating methods with respect to no clock-gating in the design.",
"title": ""
},
{
"docid": "e57131739db1ed904cb0032dddd67804",
"text": "We present a cross-layer modeling and design approach for multigigabit indoor wireless personal area networks (WPANs) utilizing the unlicensed millimeter (mm) wave spectrum in the 60 GHz band. Our approach accounts for the following two characteristics that sharply distinguish mm wave networking from that at lower carrier frequencies. First, mm wave links are inherently directional: directivity is required to overcome the higher path loss at smaller wavelengths, and it is feasible with compact, low-cost circuit board antenna arrays. Second, indoor mm wave links are highly susceptible to blockage because of the limited ability to diffract around obstacles such as the human body and furniture. We develop a diffraction-based model to determine network link connectivity as a function of the locations of stationary and moving obstacles. For a centralized WPAN controlled by an access point, it is shown that multihop communication, with the introduction of a small number of relay nodes, is effective in maintaining network connectivity in scenarios where single-hop communication would suffer unacceptable outages. The proposed multihop MAC protocol accounts for the fact that every link in the WPAN is highly directional, and is shown, using packet level simulations, to maintain high network utilization with low overhead.",
"title": ""
}
] |
scidocsrr
|
adfbcfeacce9b78d0ea346b8d9b3fb52
|
Map-supervised road detection
|
[
{
"docid": "add9821c4680fab8ad8dfacd8ca4236e",
"text": "In this paper, we propose to fuse the LIDAR and monocular image in the framework of conditional random field to detect the road robustly in challenging scenarios. LIDAR points are aligned with pixels in image by cross calibration. Then boosted decision tree based classifiers are trained for image and point cloud respectively. The scores of the two kinds of classifiers are treated as the unary potentials of the corresponding pixel nodes of the random field. The fused conditional random field can be solved efficiently with graph cut. Extensive experiments tested on KITTI-Road benchmark show that our method reaches the state-of-the-art.",
"title": ""
},
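The abstract above fuses per-pixel scores from an image classifier and a LIDAR classifier as unary potentials of a random field solved with graph cut. The sketch below illustrates that idea using PyMaxflow; the score-averaging weight, the constant pairwise term, and the cost construction are assumptions for illustration rather than the paper's exact potentials.

```python
# Hedged sketch: fuse two per-pixel road probabilities as unary potentials
# and solve the binary labelling with graph cut (PyMaxflow as one possible solver).
import numpy as np
import maxflow

def fuse_and_segment(img_score, lidar_score, alpha=0.5, pairwise=2.0):
    """img_score, lidar_score: HxW arrays of P(road) in [0, 1]."""
    p_road = alpha * img_score + (1.0 - alpha) * lidar_score
    eps = 1e-6
    # Unary costs as negative log-likelihoods of the two labels.
    cost_road = -np.log(p_road + eps)
    cost_bg = -np.log(1.0 - p_road + eps)

    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(p_road.shape)
    g.add_grid_edges(nodes, pairwise)             # 4-connected smoothness term
    g.add_grid_tedges(nodes, cost_bg, cost_road)  # terminal (unary) edges
    g.maxflow()
    # Boolean partition from the min-cut; which side corresponds to "road"
    # follows from the tedge convention used above.
    return g.get_grid_segments(nodes)
```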
{
"docid": "fa88e0d0610f60522fc1140b39fc2972",
"text": "The majority of current image-based road following algorithms operate, at least in part, by assuming the presence of structural or visual cues unique to the roadway. As a result, these algorithms are poorly suited to the task of tracking unstructured roads typical in desert environments. In this paper, we propose a road following algorithm that operates in a selfsupervised learning regime, allowing it to adapt to changing road conditions while making no assumptions about the general structure or appearance of the road surface. An application of optical flow techniques, paired with one-dimensional template matching, allows identification of regions in the current camera image that closely resemble the learned appearance of the road in the recent past. The algorithm assumes the vehicle lies on the road in order to form templates of the road’s appearance. A dynamic programming variant is then applied to optimize the 1-D template match results while enforcing a constraint on the maximum road curvature expected. Algorithm output images, as well as quantitative results, are presented for three distinct road types encountered in actual driving video acquired in the California Mojave Desert.",
"title": ""
}
] |
[
{
"docid": "745bbe075634f40e6c66716a6b877619",
"text": "Collaborative filtering, a widely-used user-centric recommendation technique, predicts an item’s rating by aggregating its ratings from similar users. User similarity is usually calculated by cosine similarity or Pearson correlation coefficient. However, both of them consider only the direction of rating vectors, and suffer from a range of drawbacks. To solve these issues, we propose a novel Bayesian similarity measure based on the Dirichlet distribution, taking into consideration both the direction and length of rating vectors. Further, our principled method reduces correlation due to chance. Experimental results on six real-world data sets show that our method achieves superior accuracy.",
"title": ""
},
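For context on the baselines that the abstract above criticizes, the snippet below computes cosine similarity and the Pearson correlation coefficient over two users' co-rated items; both depend only on the direction of the rating vectors, which is the limitation the proposed Bayesian measure addresses. The Dirichlet-based measure itself is not reproduced here.

```python
# Baseline user-user similarities used in collaborative filtering.
import numpy as np

def cosine_similarity(u, v):
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def pearson_similarity(u, v):
    u, v = np.asarray(u, float), np.asarray(v, float)
    du, dv = u - u.mean(), v - v.mean()
    return float(np.dot(du, dv) / (np.linalg.norm(du) * np.linalg.norm(dv) + 1e-12))

# Both measures ignore the "length" of the rating vectors: scaling one user's
# ratings leaves cosine unchanged, and shifting/scaling leaves Pearson unchanged.
ratings_a = [5, 4, 4, 3]
ratings_b = [5, 5, 4, 2]
print(cosine_similarity(ratings_a, ratings_b), pearson_similarity(ratings_a, ratings_b))
```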
{
"docid": "4ba95fbd89f88bdd6277eff955681d65",
"text": "In this paper, new dense dielectric (DD) patch array antenna prototype operating at 28 GHz for the future fifth generation (5G) short-range wireless communications applications is presented. This array antenna is proposed and designed with a standard printed circuit board (PCB) process to be suitable for integration with radio-frequency/microwave circuitry. The proposed structure employs four circular shaped DD patch radiator antenna elements fed by a l-to-4 Wilkinson power divider surrounded by an electromagnetic bandgap (EBG) structure. The DD patch shows better radiation and total efficiencies compared with the metallic patch radiator. For further gain improvement, a dielectric layer of a superstrate is applied above the array antenna. The calculated impedance bandwidth of proposed array antenna ranges from 27.1 GHz to 29.5 GHz for reflection coefficient (Sn) less than -1OdB. The proposed design exhibits good stable radiation patterns over the whole frequency band of interest with a total realized gain more than 16 dBi. Due to the remarkable performance of the proposed array, it can be considered as a good candidate for 5G communication applications.",
"title": ""
},
{
"docid": "2e93d2ba94e0c468634bf99be76706bb",
"text": "Entheses are sites where tendons, ligaments, joint capsules or fascia attach to bone. Inflammation of the entheses (enthesitis) is a well-known hallmark of spondyloarthritis (SpA). As entheses are associated with adjacent, functionally related structures, the concepts of an enthesis organ and functional entheses have been proposed. This is important in interpreting imaging findings in entheseal-related diseases. Conventional radiographs and CT are able to depict the chronic changes associated with enthesitis but are of very limited use in early disease. In contrast, MRI is sensitive for detecting early signs of enthesitis and can evaluate both soft-tissue changes and intraosseous abnormalities of active enthesitis. It is therefore useful for the early diagnosis of enthesitis-related arthropathies and monitoring therapy. Current knowledge and typical MRI features of the most commonly involved entheses of the appendicular skeleton in patients with SpA are reviewed. The MRI appearances of inflammatory and degenerative enthesopathy are described. New options for imaging enthesitis, including whole-body MRI and high-resolution microscopy MRI, are briefly discussed.",
"title": ""
},
{
"docid": "853375477bf531499067eedfe64e6e2d",
"text": "Each July since 2003, the author has directed summer camps that introduce middle school boys and girls to the basic ideas of computer programming. Prior to 2009, the author used Alice 2.0 to introduce object-based computing. In 2009, the author decided to offer these camps using Scratch, primarily to engage repeat campers but also for variety. This paper provides a detailed overview of this outreach, and documents its success at providing middle school girls with a positive, engaging computing experience. It also discusses the merits of Alice and Scratch for such outreach efforts; and the use of these visually oriented programs by students with disabilities, including blind students.",
"title": ""
},
{
"docid": "8bc221213edc863f8cba6f9f5d9a9be0",
"text": "Introduction The literature on business process re-engineering, benchmarking, continuous improvement and many other approaches of modern management is very abundant. One thing which is noticeable, however, is the growing usage of the word “process” in everyday business language. This suggests that most organizations adopt a process-based approach to managing their operations and that business process management (BPM) is a well-established concept. Is this really what takes place? On examination of the literature which refers to BPM, it soon emerged that the use of this concept is not really pervasive and what in fact has been acknowledged hitherto as prevalent business practice is no more than structural changes, the use of systems such as EN ISO 9000 and the management of individual projects.",
"title": ""
},
{
"docid": "3a5d43d86d39966aca2d93d1cf66b13d",
"text": "In the current context of increased surveillance and security, more sophisticated and robust surveillance systems are needed. One idea relies on the use of pairs of video (visible spectrum) and thermal infrared (IR) cameras located around premises of interest. To automate the system, a robust person detection algorithm and the development of an efficient technique enabling the fusion of the information provided by the two sensors becomes necessary and these are described in this chapter. Recently, multi-sensor based image fusion system is a challenging task and fundamental to several modern day image processing applications, such as security systems, defence applications, and intelligent machines. Image fusion techniques have been actively investigated and have wide application in various fields. It is often a vital pre-processing procedure to many computer vision and image processing tasks which are dependent on the acquisition of imaging data via sensors, such as IR and visible. One such task is that of human detection. To detect humans with an artificial system is difficult for a number of reasons as shown in Figure 1 (Gavrila, 2001). The main challenge for a vision-based pedestrian detector is the high degree of variability with the human appearance due to articulated motion, body size, partial occlusion, inconsistent cloth texture, highly cluttered backgrounds and changing lighting conditions.",
"title": ""
},
{
"docid": "33b37422ace8a300d53d4896de6bbb6f",
"text": "Digital investigations of the real world through point clouds and derivatives are changing how curators, cultural heritage researchers and archaeologists work and collaborate. To progressively aggregate expertise and enhance the working proficiency of all professionals, virtual reconstructions demand adapted tools to facilitate knowledge dissemination. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. In this paper, we review the state of the art of point cloud integration within archaeological applications, giving an overview of 3D technologies for heritage, digital exploitation and case studies showing the assimilation status within 3D GIS. Identified issues and new perspectives are addressed through a knowledge-based point cloud processing framework for multi-sensory data, and illustrated on mosaics and quasi-planar objects. A new acquisition, pre-processing, segmentation and ontology-based classification method on hybrid point clouds from both terrestrial laser scanning and dense image matching is proposed to enable reasoning for information extraction. Experiments in detection and semantic enrichment show promising results of 94% correct semantization. Then, we integrate the metadata in an archaeological smart point cloud data structure allowing spatio-semantic queries related to CIDOC-CRM. Finally, a WebGL prototype is presented that leads to efficient communication between actors by proposing optimal 3D data visualizations as a basis on which interaction can grow.",
"title": ""
},
{
"docid": "cb0021ec58487e3dabc445f75918c974",
"text": "This document includes supplementary material for the semi-supervised approach towards framesemantic parsing for unknown predicates (Das and Smith, 2011). We include the names of the test documents used in the study, plot the results for framesemantic parsing while varying the hyperparameter that is used to determine the number of top frames to be selected from the posterior distribution over each target of a constructed graph and argue why the semi-supervised self-training baseline did not perform well on the task.",
"title": ""
},
{
"docid": "90bf5834a6e78ed946a6c898f1c1905e",
"text": "Many grid connected power electronic systems, such as STATCOMs, UPFCs, and distributed generation system interfaces, use a voltage source inverter (VSI) connected to the supply network through a filter. This filter, typically a series inductance, acts to reduce the switching harmonics entering the distribution network. An alternative filter is a LCL network, which can achieve reduced levels of harmonic distortion at lower switching frequencies and with less inductance, and therefore has potential benefits for higher power applications. However, systems incorporating LCL filters require more complex control strategies and are not commonly presented in literature. This paper proposes a robust strategy for regulating the grid current entering a distribution network from a three-phase VSI system connected via a LCL filter. The strategy integrates an outer loop grid current regulator with inner capacitor current regulation to stabilize the system. A synchronous frame PI current regulation strategy is used for the outer grid current control loop. Linear analysis, simulation, and experimental results are used to verify the stability of the control algorithm across a range of operating conditions. Finally, expressions for “harmonic impedance” of the system are derived to study the effects of supply voltage distortion on the harmonic performance of the system.",
"title": ""
},
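The abstract above describes an outer synchronous-frame PI grid-current loop stabilized by inner capacitor-current regulation. Below is a hedged sketch of a discrete-time PI regulator of the kind used for each d/q axis; the gains, sampling period, limits, and anti-windup scheme are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of a discrete-time synchronous-frame PI current regulator.
class SynchronousFramePI:
    def __init__(self, kp: float, ki: float, ts: float, limit: float):
        self.kp, self.ki, self.ts, self.limit = kp, ki, ts, limit
        self.integral = 0.0

    def step(self, reference: float, measured: float) -> float:
        error = reference - measured
        self.integral += self.ki * error * self.ts
        # Simple anti-windup clamp on the integrator state.
        self.integral = max(-self.limit, min(self.limit, self.integral))
        output = self.kp * error + self.integral
        return max(-self.limit, min(self.limit, output))

# One regulator per d/q axis; an inner loop would subtract a capacitor-current
# feedback term from 'output' before modulation to damp the LCL resonance.
pi_d = SynchronousFramePI(kp=10.0, ki=500.0, ts=1e-4, limit=400.0)
```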
{
"docid": "c1c9f0a61b8ec92d4904fa0fd84a4073",
"text": "This work presents a Brain-Computer Interface (BCI) based on the Steady-State Visual Evoked Potential (SSVEP) that can discriminate four classes once per second. A statistical test is used to extract the evoked response and a decision tree is used to discriminate the stimulus frequency. Designed according such approach, volunteers were capable to online operate a BCI with hit rates varying from 60% to 100%. Moreover, one of the volunteers could guide a robotic wheelchair through an indoor environment using such BCI. As an additional feature, such BCI incorporates a visual feedback, which is essential for improving the performance of the whole system. All of this aspects allow to use this BCI to command a robotic wheelchair efficiently.",
"title": ""
},
{
"docid": "909d9d1b9054586afc4b303e94acae73",
"text": "Humans learn to solve tasks of increasing complexity by building on top of previously acquired knowledge. Typically, there exists a natural progression in the tasks that we learn – most do not require completely independent solutions, but can be broken down into simpler subtasks. We propose to represent a solver for each task as a neural module that calls existing modules (solvers for simpler tasks) in a program-like manner. Lower modules are a black box to the calling module, and communicate only via a query and an output. Thus, a module for a new task learns to query existing modules and composes their outputs in order to produce its own output. Each module also contains a residual component that learns to solve aspects of the new task that lower modules cannot solve. Our model effectively combines previous skill-sets, does not suffer from forgetting, and is fully differentiable. We test our model in learning a set of visual reasoning tasks, and demonstrate state-ofthe-art performance in Visual Question Answering, the highest-level task in our task set. By evaluating the reasoning process using non-expert human judges, we show that our model is more interpretable than an attention-based baseline.",
"title": ""
},
{
"docid": "fd29a4adc5eba8025da48eb174bc0817",
"text": "Achieving the upper limits of face identification accuracy in forensic applications can minimize errors that have profound social and personal consequences. Although forensic examiners identify faces in these applications, systematic tests of their accuracy are rare. How can we achieve the most accurate face identification: using people and/or machines working alone or in collaboration? In a comprehensive comparison of face identification by humans and computers, we found that forensic facial examiners, facial reviewers, and superrecognizers were more accurate than fingerprint examiners and students on a challenging face identification test. Individual performance on the test varied widely. On the same test, four deep convolutional neural networks (DCNNs), developed between 2015 and 2017, identified faces within the range of human accuracy. Accuracy of the algorithms increased steadily over time, with the most recent DCNN scoring above the median of the forensic facial examiners. Using crowd-sourcing methods, we fused the judgments of multiple forensic facial examiners by averaging their rating-based identity judgments. Accuracy was substantially better for fused judgments than for individuals working alone. Fusion also served to stabilize performance, boosting the scores of lower-performing individuals and decreasing variability. Single forensic facial examiners fused with the best algorithm were more accurate than the combination of two examiners. Therefore, collaboration among humans and between humans and machines offers tangible benefits to face identification accuracy in important applications. These results offer an evidence-based roadmap for achieving the most accurate face identification possible.",
"title": ""
},
{
"docid": "0a5ae1eb45404d6a42678e955c23116c",
"text": "This study assessed the validity of the Balance Scale by examining: how Scale scores related to clinical judgements and self-perceptions of balance, laboratory measures of postural sway and external criteria reflecting balancing ability; if scores could predict falls in the elderly; and how they related to motor and functional performance in stroke patients. Elderly residents (N = 113) were assessed for functional performance and balance regularly over a nine-month period. Occurrence of falls was monitored for a year. Acute stroke patients (N = 70) were periodically rated for functional independence, motor performance and balance for over three months. Thirty-one elderly subjects were assessed by clinical and laboratory indicators reflecting balancing ability. The Scale correlated moderately with caregiver ratings, self-ratings and laboratory measures of sway. Differences in mean Scale scores were consistent with the use of mobility aids by elderly residents and differentiated stroke patients by location of follow-up. Balance scores predicted the occurrence of multiple falls among elderly residents and were strongly correlated with functional and motor performance in stroke patients.",
"title": ""
},
{
"docid": "c1cdc9bb29660e910ccead445bcc896d",
"text": "This paper describes an efficient technique for com' puting a hierarchical representation of the objects contained in a complex 3 0 scene. First, an adjacency graph keeping the costs of grouping the different pairs of objects in the scene is built. Then the minimum spanning tree (MST) of that graph is determined. A binary clustering tree (BCT) is obtained from the MS'I: Finally, a merging stage joins the adjacent nodes in the BCT which have similar costs. The final result is an n-ary tree which defines an intuitive clustering of the objects of the scene at different levels of abstraction. Experimental results with synthetic 3 0 scenes are presented.",
"title": ""
},
{
"docid": "6a6bd93714e6e77a7b9834e8efee943a",
"text": "Many information systems involve data about people. In order to reliably associate data with particular individuals, it is necessary that an effective and efficient identification scheme be established and maintained. There is remarkably little in the information technology literature concerning human identification. This paper seeks to overcome that deficiency, by undertaking a survey of human identity and human identification. The techniques discussed include names, codes, knowledge-based and token-based id, and biometrics. The key challenge to management is identified as being to devise a scheme which is practicable and economic, and of sufficiently high integrity to address the risks the organisation confronts in its dealings with people. It is proposed that much greater use be made of schemes which are designed to afford people anonymity, or enable them to use multiple identities or pseudonyms, while at the same time protecting the organisation's own interests. Multi-purpose and inhabitant registration schemes are described, and the recurrence of proposals to implement and extent them is noted. Public policy issues are identified. Of especial concern is the threat to personal privacy that the general-purpose use of an inhabitant registrant scheme represents. It is speculated that, where such schemes are pursued energetically, the reaction may be strong enough to threaten the social fabric.",
"title": ""
},
{
"docid": "cbc6bd586889561cc38696f758ad97d2",
"text": "Introducing a new hobby for other people may inspire them to join with you. Reading, as one of mutual hobby, is considered as the very easy hobby to do. But, many people are not interested in this hobby. Why? Boring is the reason of why. However, this feel actually can deal with the book and time of you reading. Yeah, one that we will refer to break the boredom in reading is choosing design of experiments statistical principles of research design and analysis as the reading material.",
"title": ""
},
{
"docid": "0f5511aaed3d6627671a5e9f68df422a",
"text": "As people document more of their lives online, some recent systems are encouraging people to later revisit those recordings, a practice we're calling technology-mediated reflection (TMR). Since we know that unmediated reflection benefits psychological well-being, we explored whether and how TMR affects well-being. We built Echo, a smartphone application for recording everyday experiences and reflecting on them later. We conducted three system deployments with 44 users who generated over 12,000 recordings and reflections. We found that TMR improves well-being as assessed by four psychological metrics. By analyzing the content of these entries we discovered two mechanisms that explain this improvement. We also report benefits of very long-term TMR.",
"title": ""
},
{
"docid": "5dcbebce421097f887f43669e1294b6f",
"text": "The paper syncretizes the fundamental concept of the Sea Computing model in Internet of Things and the routing protocol of the wireless sensor network, and proposes a new routing protocol CASCR (Context-Awareness in Sea Computing Routing Protocol) for Internet of Things, based on context-awareness which belongs to the key technologies of Internet of Things. Furthermore, the paper describes the details on the protocol in the work flow, data structure and quantitative algorithm and so on. Finally, the simulation is given to analyze the work performance of the protocol CASCR. Theoretical analysis and experiment verify that CASCR has higher energy efficient and longer lifetime than the congeneric protocols. The paper enriches the theoretical foundation and makes some contribution for wireless sensor network transiting to Internet of Things in this research phase.",
"title": ""
},
{
"docid": "c581d1300bf07663fcfd8c704450db09",
"text": "This research aimed at the case of customers’ default payments in Taiwan and compares the predictive accuracy of probability of default among six data mining methods. From the perspective of risk management, the result of predictive accuracy of the estimated probability of default will be more valuable than the binary result of classification credible or not credible clients. Because the real probability of default is unknown, this study presented the novel ‘‘Sorting Smoothing Method” to estimate the real probability of default. With the real probability of default as the response variable (Y), and the predictive probability of default as the independent variable (X), the simple linear regression result (Y = A + BX) shows that the forecasting model produced by artificial neural network has the highest coefficient of determination; its regression intercept (A) is close to zero, and regression coefficient (B) to one. Therefore, among the six data mining techniques, artificial neural network is the only one that can accurately estimate the real probability of default. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
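The abstract above estimates a "real" probability of default with a Sorting Smoothing Method and then regresses it on the predicted probability (Y = A + BX). The sketch below is one plausible reading of that procedure: sort by predicted probability, smooth the binary outcomes with a fixed window, and fit a line; the window size n and the use of np.polyfit are assumptions.

```python
# Hedged sketch of a sorting-smoothing calibration check for default predictions.
import numpy as np

def sorting_smoothing(pred_prob, defaulted, n=50):
    order = np.argsort(pred_prob)
    x = np.asarray(pred_prob, float)[order]       # predicted PD (X), sorted
    y_raw = np.asarray(defaulted, float)[order]   # observed 0/1 outcomes
    kernel = np.ones(2 * n + 1) / (2 * n + 1)
    y = np.convolve(y_raw, kernel, mode="same")   # smoothed estimate of the "real" PD (Y)
    b, a = np.polyfit(x, y, deg=1)                # fit Y = A + B*X
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    # A close to 0, B close to 1, and high R^2 indicate a well-calibrated model.
    return a, b, r2
```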
{
"docid": "5fc8afbe7d55af3274d849d1576d3b13",
"text": "It is a difficult task to classify images with multiple class labels using only a small number of labeled examples, especially when the label (class) distribution is imbalanced. Emotion classification is such an example of imbalanced label distribution, because some classes of emotions like disgusted are relatively rare comparing to other labels like happy or sad. In this paper, we propose a data augmentation method using generative adversarial networks (GAN). It can complement and complete the data manifold and find better margins between neighboring classes. Specifically, we design a framework using a CNN model as the classifier and a cycle-consistent adversarial networks (CycleGAN) as the generator. In order to avoid gradient vanishing problem, we employ the least-squared loss as adversarial loss. We also propose several evaluation methods on three benchmark datasets to validate GAN’s performance. Empirical results show that we can obtain 5%∼10% increase in the classification accuracy after employing the GAN-based data augmentation techniques.",
"title": ""
}
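The abstract above swaps the usual adversarial log-loss for a least-squares loss to avoid vanishing gradients. The following PyTorch fragment shows that least-squares (LSGAN-style) objective in isolation; the generator, discriminator, and CycleGAN consistency terms are omitted, and D is assumed to output a raw scalar score.

```python
# Least-squares adversarial losses (LSGAN-style), shown in isolation.
import torch

def lsgan_d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    # Discriminator pushes real scores toward 1 and fake scores toward 0.
    return 0.5 * ((d_real - 1.0) ** 2).mean() + 0.5 * (d_fake ** 2).mean()

def lsgan_g_loss(d_fake: torch.Tensor) -> torch.Tensor:
    # Generator pushes the discriminator's score on generated samples toward 1.
    return 0.5 * ((d_fake - 1.0) ** 2).mean()
```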
] |
scidocsrr
|
9d9ba5dbd1001814e255be0b16c9393c
|
Polyglot Neural Language Models: A Case Study in Cross-Lingual Phonetic Representation Learning
|
[
{
"docid": "2917b7b1453f9e6386d8f47129b605fb",
"text": "We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form–function relationship in language, our “composed” word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).",
"title": ""
},
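The abstract above composes word vectors from characters with bidirectional LSTMs. Below is a compact PyTorch sketch of that composition, where a word vector is the concatenation of the two directions' final hidden states; embedding and hidden sizes, and the single-word batching, are illustrative assumptions.

```python
# Hedged sketch: compose a word vector from its characters with a BiLSTM.
import torch
import torch.nn as nn

class CharToWord(nn.Module):
    def __init__(self, n_chars: int, char_dim: int = 32, hidden: int = 64):
        super().__init__()
        self.emb = nn.Embedding(n_chars, char_dim)
        self.lstm = nn.LSTM(char_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        # char_ids: (batch, word_length) integer character indices.
        x = self.emb(char_ids)
        _, (h_n, _) = self.lstm(x)                    # h_n: (2, batch, hidden)
        return torch.cat([h_n[0], h_n[1]], dim=-1)    # (batch, 2*hidden) word vector

# One (batch of one) seven-character word built from random character ids.
word_vec = CharToWord(n_chars=100)(torch.randint(0, 100, (1, 7)))
```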
{
"docid": "250d19b0185d69ec74b8f87e112b9570",
"text": "In this paper, we investigate the application of recurrent neural network language models (RNNLM) and factored language models (FLM) to the task of language modeling for Code-Switching speech. We present a way to integrate partof-speech tags (POS) and language information (LID) into these models which leads to significant improvements in terms of perplexity. Furthermore, a comparison between RNNLMs and FLMs and a detailed analysis of perplexities on the different backoff levels are performed. Finally, we show that recurrent neural networks and factored language models can be combined using linear interpolation to achieve the best performance. The final combined language model provides 37.8% relative improvement in terms of perplexity on the SEAME development set and a relative improvement of 32.7% on the evaluation set compared to the traditional n-gram language model.",
"title": ""
}
] |
[
{
"docid": "b56d144f1cda6378367ea21e9c76a39e",
"text": "The main objective of our work has been to develop and then propose a new and unique methodology useful in developing the various features of heart rate variability (HRV) and carotid arterial wall thickness helpful in diagnosing cardiovascular disease. We also propose a suitable prediction model to enhance the reliability of medical examinations and treatments for cardiovascular disease. We analyzed HRV for three recumbent postures. The interaction effects between the recumbent postures and groups of normal people and heart patients were observed based on HRV indexes. We also measured intima-media of carotid arteries and used measurements of arterial wall thickness as other features. Patients underwent carotid artery scanning using high-resolution ultrasound devised in a previous study. In order to extract various features, we tested six classification methods. As a result, CPAR and SVM (gave about 85%-90% goodness of fit) outperforming the other classifiers.",
"title": ""
},
{
"docid": "ab7b09f6779017479b12b20035ad2532",
"text": "This article presents a 4:1 wide-band balun that won the student design competition for wide-band baluns held during the 2016 IEEE Microwave Theory and Techniques Society (MTT-S) International Microwave Symposium (IMS2016) in San Francisco, California. For this contest, sponsored by Technical Committee MTT-17, participants were required to implement and evaluate their own baluns, with the winning entry achieving the widest bandwidth while satisfying the conditions of the competition rules during measurements at IMS2016. Some of the conditions were revised for this year's competition compared with previous competitions as follows.",
"title": ""
},
{
"docid": "cf264a124cc9f68cf64cacb436b64fa3",
"text": "Clustering validation has long been recognized as one of the vital issues essential to the success of clustering applications. In general, clustering validation can be categorized into two classes, external clustering validation and internal clustering validation. In this paper, we focus on internal clustering validation and present a detailed study of 11 widely used internal clustering validation measures for crisp clustering. From five conventional aspects of clustering, we investigate their validation properties. Experiment results show that S\\_Dbw is the only internal validation measure which performs well in all five aspects, while other measures have certain limitations in different application scenarios.",
"title": ""
},
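As a concrete illustration of internal clustering validation (the subject of the study above), the snippet below scores k-means partitions with two widely available internal indices from scikit-learn. S_Dbw, the measure the paper singles out, is not implemented here; silhouette and Calinski-Harabasz merely stand in for the general recipe of choosing the partition that optimizes an internal index.

```python
# Internal (label-free) validation of candidate clusterings.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, calinski_harabasz_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(3, 0.5, (100, 2))])

for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, silhouette_score(X, labels), calinski_harabasz_score(X, labels))
# The k that maximizes the internal indices is taken as the best partition.
```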
{
"docid": "3c9b28e47b492e329043941f4ff088b7",
"text": "The importance of motion in attracting attention is well known. While watching videos, where motion is prevalent, how do we quantify the regions that are motion salient? In this paper, we investigate the role of motion in attention and compare it with the influence of other low-level features like image orientation and intensity. We propose a framework for motion saliency. In particular, we integrate motion vector information with spatial and temporal coherency to generate a motion attention map. The results show that our model achieves good performance in identifying regions that are moving and salient. We also find motion to have greater influence on saliency than other low-level features when watching videos.",
"title": ""
},
{
"docid": "672fa729e41d20bdd396f9de4ead36b3",
"text": "Data that encompasses relationships is represented by a graph of interconnected nodes. Social network analysis is the study of such graphs which examines questions related to structures and patterns that can lead to the understanding of the data and predicting the trends of social networks. Static analysis, where the time of interaction is not considered (i.e., the network is frozen in time), misses the opportunity to capture the evolutionary patterns in dynamic networks. Specifically, detecting the community evolutions, the community structures that changes in time, provides insight into the underlying behaviour of the network. Recently, a number of researchers have started focusing on identifying critical events that characterize the evolution of communities in dynamic scenarios. In this paper, we present a framework for modeling and detecting community evolution in social networks, where a series of significant events is defined for each community. A community matching algorithm is also proposed to efficiently identify and track similar communities over time. We also define the concept of meta community which is a series of similar communities captured in different timeframes and detected by our matching algorithm. We illustrate the capabilities and potential of our framework by applying it to two real datasets. Furthermore, the events detected by the framework is supplemented by extraction and investigation of the topics discovered for each community. c © 2011 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "3ae9da3a27b00fb60f9e8771de7355fe",
"text": "In the past decade, graph-based structures have penetrated nearly every aspect of our lives. The detection of anomalies in these networks has become increasingly important, such as in exposing infected endpoints in computer networks or identifying socialbots. In this study, we present a novel unsupervised two-layered meta-classifier that can detect irregular vertices in complex networks solely by utilizing topology-based features. Following the reasoning that a vertex with many improbable links has a higher likelihood of being anomalous, we applied our method on 10 networks of various scales, from a network of several dozen students to online networks with millions of vertices. In every scenario, we succeeded in identifying anomalous vertices with lower false positive rates and higher AUCs compared to other prevalent methods. Moreover, we demonstrated that the presented algorithm is generic, and efficient both in revealing fake users and in disclosing the influential people in social networks.",
"title": ""
},
{
"docid": "ace2fa767a14ee32f596256ebdf9554f",
"text": "Computing systems have steadily evolved into more complex, interconnected, heterogeneous entities. Ad-hoc techniques are most often used in designing them. Furthermore, researchers and designers from both academia and industry have focused on vertical approaches to emphasizing the advantages of one specific feature such as fault tolerance, security or performance. Such approaches led to very specialized computing systems and applications. Autonomic systems, as an alternative approach, can control and manage themselves automatically with minimal intervention by users or system administrators. This paper presents an autonomic framework in developing and implementing autonomic computing services and applications. Firstly, it shows how to apply this framework to autonomically manage the security of networks. Then an approach is presented to develop autonomic components from existing legacy components such as software modules/applications or hardware resources (router, processor, server, etc.). Experimental evaluation of the prototype shows that the system can be programmed dynamically to enable the components to operate autonomously.",
"title": ""
},
{
"docid": "18b173283a1eb58170982504bec7484f",
"text": "Database forensics is a domain that uses database content and metadata to reveal malicious activities on database systems in an Internet of Things environment. Although the concept of database forensics has been around for a while, the investigation of cybercrime activities and cyber breaches in an Internet of Things environment would benefit from the development of a common investigative standard that unifies the knowledge in the domain. Therefore, this paper proposes common database forensic investigation processes using a design science research approach. The proposed process comprises four phases, namely: 1) identification; 2) artefact collection; 3) artefact analysis; and 4) the documentation and presentation process. It allows the reconciliation of the concepts and terminologies of all common database forensic investigation processes; hence, it facilitates the sharing of knowledge on database forensic investigation among domain newcomers, users, and practitioners.",
"title": ""
},
{
"docid": "97446299cdba049d32fa9c7333de96c5",
"text": "Wetlands all over the world have been lost or are threatened in spite of various international agreements and national policies. This is caused by: (1) the public nature of many wetlands products and services; (2) user externalities imposed on other stakeholders; and (3) policy intervention failures that are due to a lack of consistency among government policies in different areas (economics, environment, nature protection, physical planning, etc.). All three causes are related to information failures which in turn can be linked to the complexity and ‘invisibility’ of spatial relationships among groundwater, surface water and wetland vegetation. Integrated wetland research combining social and natural sciences can help in part to solve the information failure to achieve the required consistency across various government policies. An integrated wetland research framework suggests that a combination of economic valuation, integrated modelling, stakeholder analysis, and multi-criteria evaluation can provide complementary insights into sustainable and welfare-optimising wetland management and policy. Subsequently, each of the various www.elsevier.com/locate/ecolecon * Corresponding author. Tel.: +46-8-6739540; fax: +46-8-152464. E-mail address: [email protected] (T. Söderqvist). 0921-8009/00/$ see front matter © 2000 Elsevier Science B.V. All rights reserved. PII: S 0921 -8009 (00 )00164 -6 R.K. Turner et al. / Ecological Economics 35 (2000) 7–23 8 components of such integrated wetland research is reviewed and related to wetland management policy. © 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "0b32bf3a89cf144a8b440156b2b95621",
"text": "Today’s Cyber-Physical Systems (CPSs) are large, complex, and affixed with networked sensors and actuators that are targets for cyber-attacks. Conventional detection techniques are unable to deal with the increasingly dynamic and complex nature of the CPSs. On the other hand, the networked sensors and actuators generate large amounts of data streams that can be continuously monitored for intrusion events. Unsupervised machine learning techniques can be used to model the system behaviour and classify deviant behaviours as possible attacks. In this work, we proposed a novel Generative Adversarial Networks-based Anomaly Detection (GAN-AD) method for such complex networked CPSs. We used LSTM-RNN in our GAN to capture the distribution of the multivariate time series of the sensors and actuators under normal working conditions of a CPS. Instead of treating each sensor’s and actuator’s time series independently, we model the time series of multiple sensors and actuators in the CPS concurrently to take into account of potential latent interactions between them. To exploit both the generator and the discriminator of our GAN, we deployed the GAN-trained discriminator together with the residuals between generator-reconstructed data and the actual samples to detect possible anomalies in the complex CPS. We used our GAN-AD to distinguish abnormal attacked situations from normal working conditions for a complex six-stage Secure Water Treatment (SWaT) system. Experimental results showed that the proposed strategy is effective in identifying anomalies caused by various attacks with high detection rate and low false positive rate as compared to existing methods.",
"title": ""
},
{
"docid": "b9bb07dd039c0542a7309f2291732f82",
"text": "Recent progress in acquiring shape from range data permits the acquisition of seamless million-polygon meshes from physical models. In this paper, we present an algorithm and system for converting dense irregular polygon meshes of arbitrary topology into tensor product B-spline surface patches with accompanying displacement maps. This choice of representation yields a coarse but efficient model suitable for animation and a fine but more expensive model suitable for rendering. The first step in our process consists of interactively painting patch boundaries over a rendering of the mesh. In many applications, interactive placement of patch boundaries is considered part of the creative process and is not amenable to automation. The next step is gridded resampling of each boundedsection of the mesh. Our resampling algorithm lays a grid of springs across the polygon mesh, then iterates between relaxing this grid and subdividing it. This grid provides a parameterization for the mesh section, which is initially unparameterized. Finally, we fit a tensor product B-spline surface to the grid. We also output a displacement map for each mesh section, which represents the error between our fitted surface and the spring grid. These displacement maps are images; hence this representation facilitates the use of image processing operators for manipulating the geometric detail of an object. They are also compatible with modern photo-realistic rendering systems. Our resampling and fitting steps are fast enough to surface a million polygon mesh in under 10 minutes important for an interactive system. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling —curve, surface and object representations; I.3.7[Computer Graphics]:Three-Dimensional Graphics and Realism—texture; J.6[Computer-Aided Engineering]:ComputerAided Design (CAD); G.1.2[Approximation]:Spline Approximation Additional",
"title": ""
},
{
"docid": "e5f4b8d4e02f68c90fe4b18dfed2719e",
"text": "The evolution of modern electronic devices is outpacing the scalability and effectiveness of the tools used to analyze digital evidence recovered from them. Indeed, current digital forensic techniques and tools are unable to handle large datasets in an efficient manner. As a result, the time and effort required to conduct digital forensic investigations are increasing. This paper describes a promising digital forensic visualization framework that displays digital evidence in a simple and intuitive manner, enhancing decision making and facilitating the explanation of phenomena in evidentiary data.",
"title": ""
},
{
"docid": "d41ac7c4301e5efa591f1949327acb38",
"text": "During even the most quiescent behavioral periods, the cortex and thalamus express rich spontaneous activity in the form of slow (<1 Hz), synchronous network state transitions. Throughout this so-called slow oscillation, cortical and thalamic neurons fluctuate between periods of intense synaptic activity (Up states) and almost complete silence (Down states). The two decades since the original characterization of the slow oscillation in the cortex and thalamus have seen considerable advances in deciphering the cellular and network mechanisms associated with this pervasive phenomenon. There are, nevertheless, many questions regarding the slow oscillation that await more thorough illumination, particularly the mechanisms by which Up states initiate and terminate, the functional role of the rhythmic activity cycles in unconscious or minimally conscious states, and the precise relation between Up states and the activated states associated with waking behavior. Given the substantial advances in multineuronal recording and imaging methods in both in vivo and in vitro preparations, the time is ripe to take stock of our current understanding of the slow oscillation and pave the way for future investigations of its mechanisms and functions. My aim in this Review is to provide a comprehensive account of the mechanisms and functions of the slow oscillation, and to suggest avenues for further exploration.",
"title": ""
},
{
"docid": "6c4b59e0e8cc42faea528dc1fe7a09ed",
"text": "Grounded Theory is a powerful research method for collecting and analysing research data. It was ‘discovered’ by Glaser & Strauss (1967) in the 1960s but is still not widely used or understood by researchers in some industries or PhD students in some science disciplines. This paper demonstrates the steps in the method and describes the difficulties encountered in applying Grounded Theory (GT). A fundamental part of the analysis method in GT is the derivation of codes, concepts and categories. Codes and coding are explained and illustrated in Section 3. Merging the codes to discover emerging concepts is a central part of the GT method and is shown in Section 4. Glaser and Strauss’s constant comparison step is applied and illustrated so that the emerging categories can be seen coming from the concepts and leading to the emergent theory grounded in the data in Section 5. However, the initial applications of the GT method did have difficulties. Problems encountered when using the method are described to inform the reader of the realities of the approach. The data used in the illustrative analysis comes from recent IS/IT Case Study research into configuration management (CM) and the use of commercially available computer products (COTS). Why and how the GT approach was appropriate is explained in Section 6. However, the focus is on reporting GT as a research method rather than the results of the Case Study.",
"title": ""
},
{
"docid": "31cf550d44266e967716560faeb30f2b",
"text": "The explosion in workload complexity and the recent slow-down in Moore’s law scaling call for new approaches towards efficient computing. Researchers are now beginning to use recent advances in machine learning in software optimizations, augmenting or replacing traditional heuristics and data structures. However, the space of machine learning for computer hardware architecture is only lightly explored. In this paper, we demonstrate the potential of deep learning to address the von Neumann bottleneck of memory performance. We focus on the critical problem of learning memory access patterns, with the goal of constructing accurate and efficient memory prefetchers. We relate contemporary prefetching strategies to n-gram models in natural language processing, and show how recurrent neural networks can serve as a drop-in replacement. On a suite of challenging benchmark datasets, we find that neural networks consistently demonstrate superior performance in terms of precision and recall. This work represents the first step towards practical neural-network based prefetching, and opens a wide range of exciting directions for machine learning in computer architecture research.",
"title": ""
},
{
"docid": "f32ed82c3ab67c711f50394eea2b9106",
"text": "Concept-to-text generation refers to the task of automatically producing textual output from non-linguistic input. We present a joint model that captures content selection (“what to say”) and surface realization (“how to say”) in an unsupervised domain-independent fashion. Rather than breaking up the generation process into a sequence of local decisions, we define a probabilistic context-free grammar that globally describes the inherent structure of the input (a corpus of database records and text describing some of them). We recast generation as the task of finding the best derivation tree for a set of database records and describe an algorithm for decoding in this framework that allows to intersect the grammar with additional information capturing fluency and syntactic well-formedness constraints. Experimental evaluation on several domains achieves results competitive with state-of-the-art systems that use domain specific constraints, explicit feature engineering or labeled data.",
"title": ""
},
{
"docid": "5fc3cbcca7aba6f48da7df299de4abe2",
"text": "1. We studied the responses of 103 neurons in visual area V4 of anesthetized macaque monkeys to two novel classes of visual stimuli, polar and hyperbolic sinusoidal gratings. We suspected on both theoretical and experimental grounds that these stimuli would be useful for characterizing cells involved in intermediate stages of form analysis. Responses were compared with those obtained with conventional Cartesian sinusoidal gratings. Five independent, quantitative analyses of neural responses were carried out on the entire population of cells. 2. For each cell, responses to the most effective Cartesian, polar, and hyperbolic grating were compared directly. In 18 of 103 cells, the peak response evoked by one stimulus class was significantly different from the peak response evoked by the remaining two classes. Of the remaining 85 cells, 74 had response peaks for the three stimulus classes that were all within a factor of 2 of one another. 3. An information-theoretic analysis of the trial-by-trial responses to each stimulus showed that all but two cells transmitted significant information about the stimulus set as a whole. Comparison of the information transmitted about each stimulus class showed that 23 of 103 cells transmitted a significantly different amount of information about one class than about the remaining two classes. Of the remaining 80 cells, 55 had information transmission rates for the three stimulus classes that were all within a factor of 2 of one another. 4. To identify cells that had orderly tuning profiles in the various stimulus spaces, responses to each stimulus class were fit with a simple Gaussian model. Tuning curves were successfully fit to the data from at least one stimulus class in 98 of 103 cells, and such fits were obtained for at least two classes in 87 cells. Individual neurons showed a wide range of tuning profiles, with response peaks scattered throughout the various stimulus spaces; there were no major differences in the distributions of the widths or positions of tuning curves obtained for the different stimulus classes. 5. Neurons were classified according to their response profiles across the stimulus set with two objective methods, hierarchical cluster analysis and multidimensional scaling. These two analyses produced qualitatively similar results. The most distinct group of cells was highly selective for hyperbolic gratings. The majority of cells fell into one of two groups that were selective for polar gratings: one selective for radial gratings and one selective for concentric or spiral gratings. There was no group whose primary selectivity was for Cartesian gratings. 6. To determine whether cells belonging to identified classes were anatomically clustered, we compared the distribution of classified cells across electrode penetrations with the distribution that would be expected if the cells were distributed randomly. Cells with similar response profiles were often anatomically clustered. 7. A position test was used to determine whether response profiles were sensitive to precise stimulus placement. A subset of Cartesian and non-Cartesian gratings was presented at several positions in and near the receptive field. The test was run on 13 cells from the present study and 28 cells from an earlier study. All cells showed a significant degree of invariance in their selectivity across changes in stimulus position of up to 0.5 classical receptive field diameters. 8. 
A length and width test was used to determine whether cells preferring non-Cartesian gratings were selective for Cartesian grating length or width. Responses to Cartesian gratings shorter or narrower than the classical receptive field were compared with those obtained with full-field Cartesian and non-Cartesian gratings in 29 cells. Of the four cells that had shown significant preferences for non-Cartesian gratings in the main test, none showed tuning for Cartesian grating length or width that would account for their non-Cartesian res",
"title": ""
},
{
"docid": "68a5192778ae203ea1e31ba4e29b4330",
"text": "Mobile crowdsensing is becoming a vital technique for environment monitoring, infrastructure management, and social computing. However, deploying mobile crowdsensing applications in large-scale environments is not a trivial task. It creates a tremendous burden on application developers as well as mobile users. In this paper we try to reveal the barriers hampering the scale-up of mobile crowdsensing applications, and to offer our initial thoughts on the potential solutions to lowering the barriers.",
"title": ""
},
{
"docid": "e1dd2a719d3389a11323c5245cd2b938",
"text": "Secure identity tokens such as Electronic Identity (eID) cards are emerging everywhere. At the same time user-centric identity management gains acceptance. Anonymous credential schemes are the optimal realization of user-centricity. However, on inexpensive hardware platforms, typically used for eID cards, these schemes could not be made to meet the necessary requirements such as future-proof key lengths and transaction times on the order of 10 seconds. The reasons for this is the need for the hardware platform to be standardized and certified. Therefore an implementation is only possible as a Java Card applet. This results in severe restrictions: little memory (transient and persistent), an 8-bit CPU, and access to hardware acceleration for cryptographic operations only by defined interfaces such as RSA encryption operations.\n Still, we present the first practical implementation of an anonymous credential system on a Java Card 2.2.1. We achieve transaction times that are orders of magnitudes faster than those of any prior attempt, while raising the bar in terms of key length and trust model. Our system is the first one to act completely autonomously on card and to maintain its properties in the face of an untrusted terminal. In addition, we provide a formal system specification and share our solution strategies and experiences gained and with the Java Card.",
"title": ""
},
{
"docid": "7b5f5da25db515f5dcc48b2722cf00b4",
"text": "The performance of adversarial dialogue generation models relies on the quality of the reward signal produced by the discriminator. The reward signal from a poor discriminator can be very sparse and unstable, which may lead the generator to fall into a local optimum or to produce nonsense replies. To alleviate the first problem, we first extend a recently proposed adversarial dialogue generation method to an adversarial imitation learning solution. Then, in the framework of adversarial inverse reinforcement learning, we propose a new reward model for dialogue generation that can provide a more accurate and precise reward signal for generator training. We evaluate the performance of the resulting model with automatic metrics and human evaluations in two annotation settings. Our experimental results demonstrate that our model can generate more high-quality responses and achieve higher overall performance than the state-of-the-art.",
"title": ""
}
] |
scidocsrr
|
f490ccf2586f3c7e56ffe965453675c3
|
Eclectic domain mixing for effective adaptation in action spaces
|
[
{
"docid": "662c29e37706092cfa604bf57da11e26",
"text": "Article history: Available online 8 January 2014",
"title": ""
},
{
"docid": "adb64a513ab5ddd1455d93fc4b9337e6",
"text": "Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.",
"title": ""
}
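The abstract above learns a projection that minimizes the distance between the source and target empirical distributions in a latent space. The sketch below evaluates one common choice of that distance, the (biased) squared Maximum Mean Discrepancy with an RBF kernel, for a fixed projection W; treating W as given rather than learned is a simplifying assumption for illustration.

```python
# Hedged sketch: squared MMD between projected source and target samples.
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(source, target, W, gamma=1.0):
    s, t = source @ W, target @ W  # project both domains into the latent space
    return (rbf_kernel(s, s, gamma).mean()
            + rbf_kernel(t, t, gamma).mean()
            - 2.0 * rbf_kernel(s, t, gamma).mean())

rng = np.random.default_rng(0)
Xs, Xt = rng.normal(0, 1, (40, 10)), rng.normal(0.5, 1, (50, 10))
W = rng.normal(0, 0.1, (10, 3))   # a fixed projection; the paper learns it instead
print(mmd2(Xs, Xt, W))
```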
] |
[
{
"docid": "619616b551adddc2819d40d63ce4e67d",
"text": "Codependency has been defined as an extreme focus on relationships, caused by a stressful family background (J. L. Fischer, L. Spann, & D. W. Crawford, 1991). In this study the authors assessed the relationship of the Spann-Fischer Codependency Scale (J. L. Fischer et al., 1991) and the Potter-Efron Codependency Assessment (L. A. Potter-Efron & P. S. Potter-Efron, 1989) with self-reported chronic family stress and family background. Students (N = 257) completed 2 existing self-report codependency measures and provided family background information. Results indicated that women had higher codependency scores than men on the Spann-Fischer scale. Students with a history of chronic family stress (with an alcoholic, mentally ill, or physically ill parent) had significantly higher codependency scores on both scales. The findings suggest that other types of family stressors, not solely alcoholism, may be predictors of codependency.",
"title": ""
},
{
"docid": "53e6fe645eb83bcc0f86638ee7ce5578",
"text": "Multi-hop reading comprehension focuses on one type of factoid question, where a system needs to properly integrate multiple pieces of evidence to correctly answer a question. Previous work approximates global evidence with local coreference information, encoding coreference chains with DAG-styled GRU layers within a gated-attention reader. However, coreference is limited in providing information for rich inference. We introduce a new method for better connecting global evidence, which forms more complex graphs compared to DAGs. To perform evidence integration on our graphs, we investigate two recent graph neural networks, namely graph convolutional network (GCN) and graph recurrent network (GRN). Experiments on two standard datasets show that richer global information leads to better answers. Our method performs better than all published results on these datasets.",
"title": ""
},
{
"docid": "cadc31481c83e7fc413bdfb5d7bfd925",
"text": "A hierarchical model of approach and avoidance achievement motivation was proposed and tested in a college classroom. Mastery, performance-approach, and performance-avoidance goals were assessed and their antecedents and consequences examined. Results indicated that mastery goals were grounded in achievement motivation and high competence expectancies; performance-avoidance goals, in fear of failure and low competence expectancies; and performance-approach goals, in ach.ievement motivation, fear of failure, and high competence expectancies. Mastery goals facilitated intrinsic motivation, performance-approach goals enhanced graded performance, and performanceavoidance goals proved inimical to both intrinsic motivation and graded performance. The proposed model represents an integration of classic and contemporary approaches to the study of achievement motivation.",
"title": ""
},
{
"docid": "f047fa049fad96aa43211bef45c375d7",
"text": "Graph processing is increasingly used in knowledge economies and in science, in advanced marketing, social networking, bioinformatics, etc. A number of graph-processing systems, including the GPU-enabled Medusa and Totem, have been developed recently. Understanding their performance is key to system selection, tuning, and improvement. Previous performance evaluation studies have been conducted for CPU-based graph-processing systems, such as Graph and GraphX. Unlike them, the performance of GPU-enabled systems is still not thoroughly evaluated and compared. To address this gap, we propose an empirical method for evaluating GPU-enabled graph-processing systems, which includes new performance metrics and a selection of new datasets and algorithms. By selecting 9 diverse graphs and 3 typical graph-processing algorithms, we conduct a comparative performance study of 3 GPU-enabled systems, Medusa, Totem, and MapGraph. We present the first comprehensive evaluation of GPU-enabled systems with results giving insight into raw processing power, performance breakdown into core components, scalability, and the impact on performance of system-specific optimization techniques and of the GPU generation. We present and discuss many findings that would benefit users and developers interested in GPU acceleration for graph processing.",
"title": ""
},
{
"docid": "bd89993bebdbf80b516626881d459333",
"text": "Creating a mobile application often requires the developers to create one for Android och one for iOS, the two leading operating systems for mobile devices. The two applications may have the same layout and logic but several components of the user interface (UI) will differ and the applications themselves need to be developed in two different languages. This process is gruesome since it is time consuming to create two applications and it requires two different sets of knowledge. There have been attempts to create techniques, services or frameworks in order to solve this problem but these hybrids have not been able to provide a native feeling of the resulting applications. This thesis has evaluated the newly released framework React Native that can create both iOS and Android applications by compiling the code written in React. The resulting applications can share code and consists of the UI components which are unique for each platform. The thesis focused on Android and tried to replicate an existing Android application in order to measure user experience and performance. The result was surprisingly positive for React Native as some user could not tell the two applications apart and nearly all users did not mind using a React Native application. The performance evaluation measured GPU frequency, CPU load, memory usage and power consumption. Nearly all measurements displayed a performance advantage for the Android application but the differences were not protruding. The overall experience is that React Native a very interesting framework that can simplify the development process for mobile applications to a high degree. As long as the application itself is not too complex, the development is uncomplicated and one is able to create an application in very short time and be compiled to both Android and iOS. First of all I would like to express my deepest gratitude for Valtech who aided me throughout the whole thesis with books, tools and knowledge. They supplied me with two very competent consultants Alexander Lindholm and Tomas Tunström which made it possible for me to bounce off ideas and in the end having a great thesis. Furthermore, a big thanks to the other students at Talangprogrammet who have supported each other and me during this period of time and made it fun even when it was as most tiresome. Furthermore I would like to thank my examiner Erik Berglund at Linköpings university who has guided me these last months and provided with insightful comments regarding the paper. Ultimately I would like to thank my family who have always been there to support me and especially my little brother who is my main motivation in life.",
"title": ""
},
{
"docid": "3ba87a9a84f317ef3fd97c79f86340c1",
"text": "Programmers often need to reason about how a program evolved between two or more program versions. Reasoning about program changes is challenging as there is a significant gap between how programmers think about changes and how existing program differencing tools represent such changes. For example, even though modification of a locking protocol is conceptually simple and systematic at a code level, diff extracts scattered text additions and deletions per file. To enable programmers to reason about program differences at a high level, this paper proposes a rule-based program differencing approach that automatically discovers and represents systematic changes as logic rules. To demonstrate the viability of this approach, we instantiated this approach at two different abstraction levels in Java: first at the level of application programming interface (API) names and signatures, and second at the level of code elements (e.g., types, methods, and fields) and structural dependences (e.g., method-calls, field-accesses, and subtyping relationships). The benefit of this approach is demonstrated through its application to several open source projects as well as a focus group study with professional software engineers from a large e-commerce company.",
"title": ""
},
{
"docid": "124f40ccd178e6284cc66b88da98709d",
"text": "The tripeptide glutathione is the thiol compound present in the highest concentration in cells of all organs. Glutathione has many physiological functions including its involvement in the defense against reactive oxygen species. The cells of the human brain consume about 20% of the oxygen utilized by the body but constitute only 2% of the body weight. Consequently, reactive oxygen species which are continuously generated during oxidative metabolism will be generated in high rates within the brain. Therefore, the detoxification of reactive oxygen species is an essential task within the brain and the involvement of the antioxidant glutathione in such processes is very important. The main focus of this review article will be recent results on glutathione metabolism of different brain cell types in culture. The glutathione content of brain cells depends strongly on the availability of precursors for glutathione. Different types of brain cells prefer different extracellular glutathione precursors. Glutathione is involved in the disposal of peroxides by brain cells and in the protection against reactive oxygen species. In coculture astroglial cells protect other neural cell types against the toxicity of various compounds. One mechanism for this interaction is the supply by astroglial cells of glutathione precursors to neighboring cells. Recent results confirm the prominent role of astrocytes in glutathione metabolism and the defense against reactive oxygen species in brain. These results also suggest an involvement of a compromised astroglial glutathione system in the oxidative stress reported for neurological disorders.",
"title": ""
},
{
"docid": "f177b129e4a02fe42084563a469dc47d",
"text": "This paper proposes three design concepts for developing a crawling robot inspired by an inchworm, called the Omegabot. First, for locomotion, the robot strides by bending its body into an omega shape; anisotropic friction pads enable the robot to move forward using this simple motion. Second, the robot body is made of a single part but has two four-bar mechanisms and one spherical six-bar mechanism; the mechanisms are 2-D patterned into a single piece of composite and folded to become a robot body that weighs less than 1 g and that can crawl and steer. This design does not require the assembly of various mechanisms of the body structure, thereby simplifying the fabrication process. Third, a new concept for using a shape-memory alloy (SMA) coil-spring actuator is proposed; the coil spring is designed to have a large spring index and to work over a large pitch-angle range. This large-index-and-pitch SMA spring actuator cools faster and requires less energy, without compromising the amount of force and displacement that it can produce. Therefore, the frequency and the efficiency of the actuator are improved. A prototype was used to demonstrate that the inchworm-inspired, novel, small-scale, lightweight robot manufactured on a single piece of composite can crawl and steer.",
"title": ""
},
{
"docid": "016eca10ff7616958ab8f55af71cf5d7",
"text": "This paper is concerned with the problem of adaptive fault-tolerant synchronization control of a class of complex dynamical networks (CDNs) with actuator faults and unknown coupling weights. The considered input distribution matrix is assumed to be an arbitrary matrix, instead of a unit one. Within this framework, an adaptive fault-tolerant controller is designed to achieve synchronization for the CDN. Moreover, a convex combination technique and an important graph theory result are developed, such that the rigorous convergence analysis of synchronization errors can be conducted. In particular, it is shown that the proposed fault-tolerant synchronization control approach is valid for the CDN with both time-invariant and time-varying coupling weights. Finally, two simulation examples are provided to validate the effectiveness of the theoretical results.",
"title": ""
},
{
"docid": "b27e10bd1491cf59daff0b8cd38e60e5",
"text": "........................................................................................................................................................ i",
"title": ""
},
{
"docid": "62a1749f03a7f95b25983545b80b6cf7",
"text": "To allow the hidden units of a restricted Boltzmann machine to model the transformation between two successive images, Memisevic and Hinton (2007) introduced three-way multiplicative interactions that use the intensity of a pixel in the first image as a multiplicative gain on a learned, symmetric weight between a pixel in the second image and a hidden unit. This creates cubically many parameters, which form a three-dimensional interaction tensor. We describe a low-rank approximation to this interaction tensor that uses a sum of factors, each of which is a three-way outer product. This approximation allows efficient learning of transformations between larger image patches. Since each factor can be viewed as an image filter, the model as a whole learns optimal filter pairs for efficiently representing transformations. We demonstrate the learning of optimal filter pairs from various synthetic and real image sequences. We also show how learning about image transformations allows the model to perform a simple visual analogy task, and we show how a completely unsupervised network trained on transformations perceives multiple motions of transparent dot patterns in the same way as humans.",
"title": ""
},
{
"docid": "1bf01c4ffe40365f093ef89af4c3610d",
"text": "User behaviour analysis based on traffic log in wireless networks can be beneficial to many fields in real life: not only for commercial purposes, but also for improving network service quality and social management. We cluster users into groups marked by the most frequently visited websites to find their preferences. In this paper, we propose a user behaviour model based on Topic Model from document classification problems. We use the logarithmic TF-IDF (term frequency - inverse document frequency) weighing to form a high-dimensional sparse feature matrix. Then we apply LSA (Latent semantic analysis) to deduce the latent topic distribution and generate a low-dimensional dense feature matrix. K-means++, which is a classic clustering algorithm, is then applied to the dense feature matrix and several interpretable user clusters are found. Moreover, by combining the clustering results with additional demographical information, including age, gender, and financial information, we are able to uncover more realistic implications from the clustering results.",
"title": ""
},
{
"docid": "f8d10e75cef35a7fbf5477d4b0cd1288",
"text": "We present the development on an ultra-wideband (UWB) radar system and its signal processing algorithms for detecting human breathing and heartbeat in the paper. The UWB radar system consists of two (Tx and Rx) antennas and one compact CMOS UWB transceiver. Several signal processing techniques are developed for the application. The system has been tested by real measurements.",
"title": ""
},
{
"docid": "f61f67772aa4a54b8c20b76d15d1007a",
"text": "The Internet is a great discovery for ordinary citizens correspondence. People with criminal personality have found a method for taking individual data without really meeting them and with minimal danger of being gotten. It is called Phishing. Phishing represents a huge threat to the web based business industry. Not just does it smash the certainty of clients towards online business, additionally causes electronic administration suppliers colossal financial misfortune. Subsequently, it is fundamental to think about phishing. This paper gives mindfulness about Phishing assaults and hostile to phishing apparatuses.",
"title": ""
},
{
"docid": "61406f27199acc5f034c2721d66cda89",
"text": "Fischler PER •Sequence of tokens mapped to word embeddings. •Bidirectional LSTM builds context-dependent representations for each word. •A small feedforward layer encourages generalisation. •Conditional Random Field (CRF) at the top outputs the most optimal label sequence for the sentence. •Using character-based dynamic embeddings (Rei et al., 2016) to capture morphological patterns and unseen words.",
"title": ""
},
{
"docid": "8620c228a0a686788b53d9c766b5b6bf",
"text": "Projects combining agile methods with CMMI combine adaptability with predictability to better serve large customer needs. The introduction of Scrum at Systematic, a CMMI Level 5 company, doubled productivity and cut defects by 40% compared to waterfall projects in 2006 by focusing on early testing and time to fix builds. Systematic institutionalized Scrum across all projects and used data driven tools like story process efficiency to surface Product Backlog impediments. This allowed them to systematically develop a strategy for a second doubling in productivity. Two teams have achieved a sustainable quadrupling of productivity compared to waterfall projects. We discuss here the strategy to bring the entire company to that level. Our experiences shows that Scrum and CMMI together bring a more powerful combination of adaptability and predictability than either one alone and suggest how other companies can combine them to achieve Toyota level performance – 4 times the productivity and 12 times the quality of waterfall teams.",
"title": ""
},
{
"docid": "64e5cad1b64f1412b406adddc98cd421",
"text": "We examine the influence of venture capital on patented inventions in the United States across twenty industries over three decades. We address concerns about causality in several ways, including exploiting a 1979 policy shift that spurred venture capital fundraising. We find that increases in venture capital activity in an industry are associated with significantly higher patenting rates. While the ratio of venture capital to R&D averaged less than 3% from 1983–1992, our estimates suggest that venture capital may have accounted for 8% of industrial innovations in that period.",
"title": ""
},
{
"docid": "a53904f277c06e32bd6ad148399443c6",
"text": "Big data is flowing into every area of our life, professional and personal. Big data is defined as datasets whose size is beyond the ability of typical software tools to capture, store, manage and analyze, due to the time and memory complexity. Velocity is one of the main properties of big data. In this demo, we present SAMOA (Scalable Advanced Massive Online Analysis), an open-source platform for mining big data streams. It provides a collection of distributed streaming algorithms for the most common data mining and machine learning tasks such as classification, clustering, and regression, as well as programming abstractions to develop new algorithms. It features a pluggable architecture that allows it to run on several distributed stream processing engines such as Storm, S4, and Samza. SAMOA is written in Java and is available at http://samoa-project.net under the Apache Software License version 2.0.",
"title": ""
},
{
"docid": "2fa7d2f8a423c5d3ce53db0c964dcc76",
"text": "In recent years, archaeal diversity surveys have received increasing attention. Brazil is a country known for its natural diversity and variety of biomes, which makes it an interesting sampling site for such studies. However, archaeal communities in natural and impacted Brazilian environments have only recently been investigated. In this review, based on a search on the PubMed database on the last week of April 2016, we present and discuss the results obtained in the 51 studies retrieved, focusing on archaeal communities in water, sediments, and soils of different Brazilian environments. We concluded that, in spite of its vast territory and biomes, the number of publications focusing on archaeal detection and/or characterization in Brazil is still incipient, indicating that these environments still represent a great potential to be explored.",
"title": ""
}
] |
scidocsrr
|
2577cdc082a2d03bd66bf2e56128a68b
|
Making Learning and Web 2.0 Technologies Work for Higher Learning Institutions in Africa
|
[
{
"docid": "b9e7fedbc42f815b35351ec9a0c31b33",
"text": "Proponents have marketed e-learning by focusing on its adoption as the right thing to do while disregarding, among other things, the concerns of the potential users, the adverse effects on users and the existing research on the use of e-learning or related innovations. In this paper, the e-learning-adoption proponents are referred to as the technopositivists. It is argued that most of the technopositivists in the higher education context are driven by a personal agenda, with the aim of propagating a technopositivist ideology to stakeholders. The technopositivist ideology is defined as a ‘compulsive enthusiasm’ about e-learning in higher education that is being created, propagated and channelled repeatedly by the people who are set to gain without giving the educators the time and opportunity to explore the dangers and rewards of e-learning on teaching and learning. Ten myths on e-learning that the technopositivists have used are presented with the aim of initiating effective and constructive dialogue, rather than merely criticising the efforts being made. Introduction The use of technology, and in particular e-learning, in higher education is becoming increasingly popular. However, Guri-Rosenblit (2005) and Robertson (2003) propose that educational institutions should step back and reflect on critical questions regarding the use of technology in teaching and learning. The focus of Guri-Rosenblit’s article is on diverse issues of e-learning implementation in higher education, while Robertson focuses on the teacher. Both papers show that there is a change in the ‘euphoria towards eLearning’ and that a dose of techno-negativity or techno-scepticism is required so that the gap between rhetoric in the literature (with all the promises) and actual implementation can be bridged for an informed stance towards e-learning adoption. British Journal of Educational Technology Vol 41 No 2 2010 199–212 doi:10.1111/j.1467-8535.2008.00910.x © 2008 The Authors. Journal compilation © 2008 British Educational Communications and Technology Agency. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. Technology in teaching and learning has been marketed or presented to its intended market with a lot of promises, benefits and opportunities. This technopositivist ideology has denied educators and educational researchers the much needed opportunities to explore the motives, power, rewards and sanctions of information and communication technologies (ICTs), as well as time to study the impacts of the new technologies on learning and teaching. Educational research cannot cope with the speed at which technology is advancing (Guri-Rosenblit, 2005; Robertson, 2003; Van Dusen, 1998; Watson, 2001). Indeed there has been no clear distinction between teaching with and teaching about technology and therefore the relevance of such studies has not been brought to the fore. Much of the focus is on the actual educational technology as it advances, rather than its educational functions or the effects it has on the functions of teaching and learning. The teaching profession has been affected by the implementation and use of ICT through these optimistic views, and the ever-changing teaching and learning culture (Kompf, 2005; Robertson, 2003). It is therefore necessary to pause and ask the question to the technopositivist ideologists: whether in e-learning the focus is on the ‘e’ or on the learning. 
The opportunities and dangers brought about by the ‘e’ in e-learning should be soberly examined. As Gandolfo (1998, p. 24) suggests: [U]ndoubtedly, there is opportunity; the effective use of technology has the potential to improve and enhance learning. Just as assuredly there is the danger that the wrong headed adoption of various technologies apart from a sound grounding in educational research and practice will result, and indeed in some instances has already resulted, in costly additions to an already expensive enterprise without any value added. That is, technology applications must be consonant with what is known about the nature of learning and must be assessed to ensure that they are indeed enhancing learners’ experiences. Technopositivist ideology is a ‘compulsory enthusiasm’ about technology that is being created, propagated and channelled repeatedly by the people who stand to gain either economically, socially, politically or otherwise in due disregard of the trade-offs associated with the technology to the target audience (Kompf, 2005; Robertson, 2003). In e-learning, the beneficiaries of the technopositivist market are doing so by presenting it with promises that would dismiss the judgement of many. This is aptly illustrated by Robertson (2003, pp. 284–285): Information technology promises to deliver more (and more important) learning for every student accomplished in less time; to ensure ‘individualization’ no matter how large and diverse the class; to obliterate the differences and disadvantages associated with race, gender, and class; to vary and yet standardize the curriculum; to remove subjectivity from student evaluation; to make reporting and record keeping a snap; to keep discipline problems to a minimum; to enhance professional learning and discourse; and to transform the discredited teacher-centered classroom into that paean of pedagogy: the constructivist, student-centered classroom, On her part, Guri-Rosenblit (2005, p. 14) argues that the proponents and marketers of e-learning present it as offering multiple uses that do not have a clear relationship with a current or future problem. She asks two ironic, vital and relevant questions: ‘If it ain’t broken, why fix it?’ and ‘Technology is the answer—but what are the questions?’ The enthusiasm to use technology for endless possibilities has led to the belief that providing 200 British Journal of Educational Technology Vol 41 No 2 2010 © 2008 The Authors. Journal compilation © 2008 British Educational Communications and Technology Agency. information automatically leads to meaningful knowledge creation; hence blurring and confusing the distinction between information and knowledge. This is one of the many misconceptions that emerged with e-learning. There has been a great deal of confusion both in the marketing of and language used in the advocating of the ICTs in teaching and learning. As an example, Guri-Rosenblit (2005, p. 6) identified a list of 15 words used to describe the environment for teaching and learning with technology from various studies: ‘web-based learning, computermediated instruction, virtual classrooms, online education, e-learning, e-education, computer-driven interactive communication, open and distance learning, I-Campus, borderless education, cyberspace learning environments, distributed learning, flexible learning, blended learning, mobile-learning’. The list could easily be extended with many more words. Presented with this array of words, most educators are not sure of what e-learning is. 
Could it be synonymous to distance education? Is it just the use of online tools to enhance or enrich the learning experiences? Is it stashing the whole courseware or parts of it online for students to access? Or is it a new form of collaborative or cooperative learning? Clearly, any of these questions could be used to describe an aspect of e-learning and quite often confuse the uninformed educator. These varied words, with as many definitions, show the degree to which e-learning is being used in different cultures and in different organisations. Unfortunately, many of these uses are based on popular assumptions and myths. While the myths that will be discussed in this paper are generic, and hence applicable to e-learning use in most cultures and organisations, the paper’s focus is on higher education, because it forms part of a larger e-learning research project among higher education institutions (HEIs) and also because of the popularity of e-learning use in HEIs. Although there is considerable confusion around the term e-learning, for the purpose of this paper it will be considered as referring to the use of electronic technology and content in teaching and learning. It includes, but is not limited to, the use of the Internet; television; streaming video and video conferencing; online text and multimedia; and mobile technologies. From the nomenclature, also comes the crafting of the language for selling the technologies to the educators. Robertson (2003, p. 280) shows the meticulous choice of words by the marketers where ‘research’ is transformed into a ‘belief system’ and the past tense (used to communicate research findings) is substituted for the present and future tense, for example “Technology ‘can and will’ rather than ‘has and does’ ” in a quote from Apple’s comment: ‘At Apple, we believe the effective integration of technology into classroom instruction can and will result in higher levels of student achievement’. Similar quotes are available in the market and vendors of technology products for teaching and learning. This, however, is not limited to the market; some researchers have used similar quotes: ‘It is now conventional wisdom that those countries which fail to move from the industrial to the Information Society will not be able to compete in the globalised market system made possible by the new technologies’ (Mac Keogh, 2001, p. 223). The role of research should be to question the conventional wisdom or common sense and offer plausible answers, rather than dancing to the fine tunes of popular or mass e-Learning myths 201 © 2008 The Authors. Journal compilation © 2008 British Educational Communications and Technology Agency. wisdom. It is also interesting to note that Mac Keogh (2001, p. 233) concludes that ‘[w]hen issues other than costs and performance outcomes are considered, the rationale for introducing ICTs in education is more powerful’. Does this mean that irrespective of whether ICTs ",
"title": ""
}
] |
[
{
"docid": "90d33a2476534e542e2722d7dfa26c91",
"text": "Despite some notable and rare exceptions and after many years of relatively neglect (particularly in the ‘upper echelons’ of IS research), there appears to be some renewed interest in Information Systems Ethics (ISE). This paper reflects on the development of ISE by assessing the use and development of ethical theory in contemporary IS research with a specific focus on the ‘leading’ IS journals (according to the Association of Information Systems). The focus of this research is to evaluate if previous calls for more theoretically informed work are permeating the ‘upper echelons’ of IS research and if so, how (Walsham 1996; Smith and Hasnas 1999; Bell and Adam 2004). For the purposes of scope, this paper follows on from those previous studies and presents a detailed review of the leading IS publications between 2005to2007 inclusive. After several processes, a total of 32 papers are evaluated. This review highlights that whilst ethical topics are becoming increasingly popular in such influential media, most of the research continues to neglect considerations of ethical theory with preferences for a range of alternative approaches. Finally, this research focuses on some of the papers produced and considers how the use of ethical theory could contribute.",
"title": ""
},
{
"docid": "ed176e79496053f1c4fdee430d1aa7fc",
"text": "Event recognition systems rely on knowledge bases of event definitions to infer occurrences of events in time. Using a logical framework for representing and reasoning about events offers direct connections to machine learning, via Inductive Logic Programming (ILP), thus allowing to avoid the tedious and error-prone task of manual knowledge construction. However, learning temporal logical formalisms, which are typically utilized by logic-based event recognition systems is a challenging task, which most ILP systems cannot fully undertake. In addition, event-based data is usually massive and collected at different times and under various circumstances. Ideally, systems that learn from temporal data should be able to operate in an incremental mode, that is, revise prior constructed knowledge in the face of new evidence. In this work we present an incremental method for learning and revising event-based knowledge, in the form of Event Calculus programs. The proposed algorithm relies on abductive–inductive learning and comprises a scalable clause refinement methodology, based on a compressive summarization of clause coverage in a stream of examples. We present an empirical evaluation of our approach on real and synthetic data from activity recognition and city transport applications.",
"title": ""
},
{
"docid": "ab2c4d5317d2e10450513283c21ca6d3",
"text": "We present DEC0DE, a system for recovering information from phones with unknown storage formats, a critical problem for forensic triage. Because phones have myriad custom hardware and software, we examine only the stored data. Via flexible descriptions of typical data structures, and using a classic dynamic programming algorithm, we are able to identify call logs and address book entries in phones across varied models and manufacturers. We designed DEC0DE by examining the formats of one set of phone models, and we evaluate its performance on other models. Overall, we are able to obtain high performance for these unexamined models: an average recall of 97% and precision of 80% for call logs; and average recall of 93% and precision of 52% for address books. Moreover, at the expense of recall dropping to 14%, we can increase precision of address book recovery to 94% by culling results that don’t match between call logs and address book entries on the same phone.",
"title": ""
},
{
"docid": "90563706ada80e880b7fcf25489f9b27",
"text": "We describe the large vocabulary automatic speech recognition system developed for Modern Standard Arabic by the SRI/Nightingale team, and used for the 2007 GALE evaluation as part of the speech translation system. We show how system performance is affected by different development choices, ranging from text processing and lexicon to decoding system architecture design. Word error rate results are reported on broadcast news and conversational data from the GALE development and evaluation test sets.",
"title": ""
},
{
"docid": "1bc33dcf86871e70bd3b7856fd3c3857",
"text": "A framework for clustered-dot color halftone watermarking is proposed. Watermark patterns are embedded in the color halftone on per-separation basis. For typical CMYK printing systems, common desktop RGB color scanners are unable to provide the individual colorant halftone separations, which confounds per-separation detection methods. Not only does the K colorant consistently appear in the scanner channels as it absorbs uniformly across the spectrum, but cross-couplings between CMY separations are also observed in the scanner color channels due to unwanted absorptions. We demonstrate that by exploiting spatial frequency and color separability of clustered-dot color halftones, estimates of the individual colorant halftone separations can be obtained from scanned RGB images. These estimates, though not perfect, allow per-separation detection to operate efficiently. The efficacy of this methodology is demonstrated using continuous phase modulation for the embedding of per-separation watermarks.",
"title": ""
},
{
"docid": "0c88535a3696fe9e2c82f8488b577284",
"text": "Touch gestures can be a very important aspect when developing mobile applications with enhanced reality. The main purpose of this research was to determine which touch gestures were most frequently used by engineering students when using a simulation of a projectile motion in a mobile AR applica‐ tion. A randomized experimental design was given to students, and the results showed the most commonly used gestures to visualize are: zoom in “pinch open”, zoom out “pinch closed”, move “drag” and spin “rotate”.",
"title": ""
},
{
"docid": "04e9383039f64bf5ef90e59ba451e45f",
"text": "The current generation of manufacturing systems relies on monolithic control software which provides real-time guarantees but is hard to adapt and reuse. These qualities are becoming increasingly important for meeting the demands of a global economy. Ongoing research and industrial efforts therefore focus on service-oriented architectures (SOA) to increase the control software’s flexibility while reducing development time, effort and cost. With such encapsulated functionality, system behavior can be expressed in terms of operations on data and the flow of data between operators. In this thesis we consider industrial real-time systems from the perspective of distributed data processing systems. Data processing systems often must be highly flexible, which can be achieved by a declarative specification of system behavior. In such systems, a user expresses the properties of an acceptable solution while the system determines a suitable execution plan that meets these requirements. Applied to the real-time control domain, this means that the user defines an abstract workflow model with global timing constraints from which the system derives an execution plan that takes the underlying system environment into account. The generation of a suitable execution plan often is NP-hard and many data processing systems rely on heuristic solutions to quickly generate high quality plans. We utilize heuristics for finding real-time execution plans. Our evaluation shows that heuristics were successful in finding a feasible execution plan in 99% of the examined test cases. Lastly, data processing systems are engineered for an efficient exchange of data and therefore are usually built around a direct data flow between the operators without a mediating entity in between. Applied to SOA-based automation, the same principle is realized through service choreographies with direct communication between the individual services instead of employing a service orchestrator which manages the invocation of all services participating in a workflow. These three principles outline the main contributions of this thesis: A flexible reconfiguration of SOA-based manufacturing systems with verifiable real-time guarantees, fast heuristics based planning, and a peer-to-peer execution model for SOAs with clear semantics. We demonstrate these principles within a demonstrator that is close to a real-world industrial system.",
"title": ""
},
{
"docid": "ad6dc9f74e0fa3c544c4123f50812e14",
"text": "An ultra-wideband transition from microstrip to stripline in PCB technology is presented applying only through via holes for simple fabrication. The design is optimized using full-wave EM simulations. A prototype is manufactured and measured achieving a return loss better than 8.7dB and an insertion loss better than 1.2 dB in the FCC frequency range. A meander-shaped delay line in stripline technique is presented as an example of application.",
"title": ""
},
{
"docid": "0382ad43b6d31a347d9826194a7261ce",
"text": "In this paper, we present a representation for three-dimensional geometric animation sequences. Different from standard key-frame techniques, this approach is based on the determination of principal animation components and decouples the animation from the underlying geometry. The new representation supports progressive animation compression with spatial, as well as temporal, level-of-detail and high compression ratios. The distinction of animation and geometry allows for mapping animations onto other objects.",
"title": ""
},
{
"docid": "ed282d88b5f329490f390372c502f238",
"text": "Extracting opinion expressions from text is an essential task of sentiment analysis, which is usually treated as one of the word-level sequence labeling problems. In such problems, compositional models with multiplicative gating operations provide efficient ways to encode the contexts, as well as to choose critical information. Thus, in this paper, we adopt Long Short-Term Memory (LSTM) recurrent neural networks to address the task of opinion expression extraction and explore the internal mechanisms of the model. The proposed approach is evaluated on the Multi-Perspective Question Answering (MPQA) opinion corpus. The experimental results demonstrate improvement over previous approaches, including the state-of-the-art method based on simple recurrent neural networks. We also provide a novel micro perspective to analyze the run-time processes and gain new insights into the advantages of LSTM selecting the source of information with its flexible connections and multiplicative gating operations.",
"title": ""
},
{
"docid": "e87617852de3ce25e1955caf1f4c7a21",
"text": "Edges characterize boundaries and are therefore a problem of fundamental importance in image processing. Image Edge detection significantly reduces the amount of data and filters out useless information, while preserving the important structural properties in an image. Since edge detection is in the forefront of image processing for object detection, it is crucial to have a good understanding of edge detection algorithms. In this paper the comparative analysis of various Image Edge Detection techniques is presented. The software is developed using MATLAB 7.0. It has been shown that the Canny s edge detection algorithm performs better than all these operators under almost all scenarios. Evaluation of the images showed that under noisy conditions Canny, LoG( Laplacian of Gaussian), Robert, Prewitt, Sobel exhibit better performance, respectively. . It has been observed that Canny s edge detection algorithm is computationally more expensive compared to LoG( Laplacian of Gaussian), Sobel, Prewitt and Robert s operator CITED BY (354) 1 Gra a, R. F. P. S. O. (2012). Segmenta o de imagens tor cicas de Raio-X (Doctoral dissertation, UNIVERSIDADE DA BEIRA INTERIOR). 2 ZENDI, M., & YILMAZ, A. (2013). DEGISIK BAKIS A ILARINDAN ELDE EDILEN G R NT GRUPLARININ SINIFLANDIRILMASI. Journal of Aeronautics & Space Technolog ies/Havacilik ve Uzay Teknolojileri Derg is i, 6(1). 3 TROFINO, A. F. N. (2014). TRABALHO DE CONCLUS O DE CURSO. 4 Juan Albarrac n, J. (2011). Dise o, an lis is y optimizaci n de un s istema de reconocimiento de im genes basadas en contenido para imagen publicitaria (Doctoral dissertation). 5 Bergues, G., Ames, G., Canali, L., Schurrer, C., & Fles ia, A. G. (2014, June). Detecci n de l neas en im genes con ruido en un entorno de medici n de alta precis i n. In Biennial Congress of Argentina (ARGENCON), 2014 IEEE (pp. 582-587). IEEE. 6 Andrianto, D. S. (2013). Analisa Statistik terhadap perubahan beberapa faktor deteksi kemacetan melalui pemrosesan video beserta peng iriman notifikas i kemacetan. Jurnal Sarjana ITB bidang Teknik Elektro dan Informatika, 2(1). 7 Pier g , M., & Jaskowiec, J. Identyfikacja twarzy z wykorzystaniem Sztucznych Sieci Neuronowych oraz PCA. 8 Nugraha, K. A., Santoso, A. J., & Suselo, T. (2015, July). ALGORITMA BACKPROPAGATION PADA JARINGAN SARAF TIRUAN UNTUK PENGENALAN POLA WAYANG KULIT. In Seminar Nasional Informatika 2008 (Vol. 1, No. 4). 9 Cornet, T. (2012). Formation et D veloppement des Lacs de Titan: Interpr tation G omorpholog ique d'Ontario Lacus et Analogues Terrestres (Doctoral dissertation, Ecole Centrale de Nantes (ECN)(ECN)(ECN)(ECN)). 10 Li, L., Sun, L., Ning , G., & Tan, S. (2014). Automatic Pavement Crack Recognition Based on BP Neural Network. PROMET-Traffic&Transportation, 26(1), 11-22. 11 Quang Hong , N., Khanh Quoc, D., Viet Anh, N., Chien Van, T., ???, & ???. (2015). Rate Allocation for Block-based Compressive Sensing . Journal of Broadcast Eng ineering , 20(3), 398-407. 12 Swillo, S. (2013). Zastosowanie techniki wizyjnej w automatyzacji pomiar w geometrii i podnoszeniu jakosci wyrob w wytwarzanych w przemysle motoryzacyjnym. Prace Naukowe Politechniki Warszawskiej. Mechanika, (257), 3-128. 13 V zina, M. (2014). D veloppement de log iciels de thermographie infrarouge visant am liorer le contr le de la qualit de la pose de l enrob bitumineux. 14 Decourselle, T. (2014). 
"title": ""
},
{
"docid": "b2e493de6e09766c4ddbac7de071e547",
"text": "In this paper we describe and evaluate some recently innovated coupling metrics for object oriented OO design The Coupling Between Objects CBO metric of Chidamber and Kemerer C K are evaluated empirically using ve OO systems and compared with an alternative OO design metric called NAS which measures the Number of Associations between a class and its peers The NAS metric is directly collectible from design documents such as the Object Model of OMT Results from all systems studied indicate a strong relationship between CBO and NAS suggesting that they are not orthogonal We hypothesised that coupling would be related to understandability the number of errors and error density No relationships were found for any of the systems between class understandability and coupling However we did nd partial support for our hypothesis linking increased coupling to increased error density The work described in this paper is part of the Metrics for OO Programming Systems MOOPS project which aims are to evaluate existing OO metrics and to innovate and evaluate new OO analysis and design metrics aimed speci cally at the early stages of development",
"title": ""
},
{
"docid": "49f21df66ac901e5f37cff022353ed20",
"text": "This paper presents the implementation of the interval type-2 to control the process of production of High-strength low-alloy (HSLA) steel in a secondary metallurgy process in a simply way. The proposal evaluate fuzzy techniques to ensure the accuracy of the model, the most important advantage is that the systems do not need pretreatment of the historical data, it is used as it is. The system is a multiple input single output (MISO) and the main goal of this paper is the proposal of a system that optimizes the resources: computational, time, among others.",
"title": ""
},
{
"docid": "c070020d88fb77f768efa5f5ac2eb343",
"text": "This paper provides a critical overview of the theoretical, analytical, and practical questions most prevalent in the study of the structural and the sociolinguistic dimensions of code-switching (CS). In doing so, it reviews a range of empirical studies from around the world. The paper first looks at the linguistic research on the structural features of CS focusing in particular on the code-switching versus borrowing distinction, and the syntactic constraints governing its operation. It then critically reviews sociological, anthropological, and linguistic perspectives dominating the sociolinguistic research on CS over the past three decades. Major empirical studies on the discourse functions of CS are discussed, noting the similarities and differences between socially motivated CS and style-shifting. Finally, directions for future research on CS are discussed, giving particular emphasis to the methodological issue of its applicability to the analysis of bilingual classroom interaction.",
"title": ""
},
{
"docid": "77796f30d8d1604c459fb3f3fe841515",
"text": "The overall focus of this research is to demonstrate the savings potential generated by the integration of the design of strategic global supply chain networks with the determination of tactical production–distribution allocations and transfer prices. The logistics systems design problem is defined as follows: given a set of potential suppliers, potential manufacturing facilities, and distribution centers with multiple possible configurations, and customers with deterministic demands, determine the configuration of the production–distribution system and the transfer prices between various subsidiaries of the corporation such that seasonal customer demands and service requirements are met and the after tax profit of the corporation is maximized. The after tax profit is the difference between the sales revenue minus the total system cost and taxes. The total cost is defined as the sum of supply, production, transportation, inventory, and facility costs. Two models and their associated solution algorithms will be introduced. The savings opportunities created by designing the system with a methodology that integrates strategic and tactical decisions rather than in a hierarchical fashion are demonstrated with two case studies. The first model focuses on the setting of transfer prices in a global supply chain with the objective of maximizing the after tax profit of an international corporation. The constraints mandated by the national taxing authorities create a bilinear programming formulation. We will describe a very efficient heuristic iterative solution algorithm, which alternates between the optimization of the transfer prices and the material flows. Performance and bounds for the heuristic algorithms will be discussed. The second model focuses on the production and distribution allocation in a single country system, when the customers have seasonal demands. This model also needs to be solved as a subproblem in the heuristic solution of the global transfer price model. The research develops an integrated design methodology based on primal decomposition methods for the mixed integer programming formulation. The primal decomposition allows a natural split of the production and transportation decisions and the research identifies the necessary information flows between the subsystems. The primal decomposition method also allows a very efficient solution algorithm for this general class of large mixed integer programming models. Data requirements and solution times will be discussed for a real life case study in the packaging industry. 2002 Elsevier Science B.V. All rights reserved. European Journal of Operational Research 143 (2002) 1–18 www.elsevier.com/locate/dsw * Corresponding author. Tel.: +1-404-894-2317; fax: +1-404-894-2301. E-mail address: [email protected] (M. Goetschalckx). 0377-2217/02/$ see front matter 2002 Elsevier Science B.V. All rights reserved. PII: S0377-2217 (02 )00142-X",
"title": ""
},
{
"docid": "885a51f55d5dfaad7a0ee0c56a64ada3",
"text": "This paper presents a new method, Minimax Tree Optimization (MMTO), to learn a heuristic evaluation function of a practical alpha-beta search program. The evaluation function may be a linear or non-linear combination of weighted features, and the weights are the parameters to be optimized. To control the search results so that the move decisions agree with the game records of human experts, a well-modeled objective function to be minimized is designed. Moreover, a numerical iterative method is used to find local minima of the objective function, and more than forty million parameters are adjusted by using a small number of hyper parameters. This method was applied to shogi, a major variant of chess in which the evaluation function must handle a larger state space than in chess. Experimental results show that the large-scale optimization of the evaluation function improves the playing strength of shogi programs, and the new method performs significantly better than other methods. Implementation of the new method in our shogi program Bonanza made substantial contributions to the program’s first-place finish in the 2013 World Computer Shogi Championship. Additionally, we present preliminary evidence of broader applicability of our method to other two-player games such as chess.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "15886d83be78940609c697b30eb73b13",
"text": "Why is corruption—the misuse of public office for private gain— perceived to be more widespread in some countries than others? Different theories associate this with particular historical and cultural traditions, levels of economic development, political institutions, and government policies. This article analyzes several indexes of “perceived corruption” compiled from business risk surveys for the 1980s and 1990s. Six arguments find support. Countries with Protestant traditions, histories of British rule, more developed economies, and (probably) higher imports were less \"corrupt\". Federal states were more \"corrupt\". While the current degree of democracy was not significant, long exposure to democracy predicted lower corruption.",
"title": ""
},
{
"docid": "9b7ff8a7dec29de5334f3de8d1a70cc3",
"text": "The paper introduces a complete offline programming toolbox for remote laser welding (RLW) which provides a semi-automated method for computing close-to-optimal robot programs. A workflow is proposed for the complete planning process, and new models and algorithms are presented for solving the optimization problems related to each step of the workflow: the sequencing of the welding tasks, path planning, workpiece placement, calculation of inverse kinematics and the robot trajectory, as well as for generating the robot program code. The paper summarizes the results of an industrial case study on the assembly of a car door using RLW technology, which illustrates the feasibility and the efficiency of the proposed approach.",
"title": ""
},
{
"docid": "1d29f224933954823228c25e5e99980e",
"text": "This study was carried out in a Turkish university with 216 undergraduate students of computer technology as respondents. The study aimed to develop a scale (UECUBS) to determine the unethical computer use behavior. A factor analysis of the related items revealed that the factors were can be divided under five headings; intellectual property, social impact, safety and quality, net integrity and information integrity. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
1ccc0bff27f008ea979adef174ec6e93
|
Authenticated Key Exchange over Bitcoin
|
[
{
"docid": "32ca9711622abd30c7c94f41b91fa3f6",
"text": "The Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic curve analogue of the Digital Signature Algorithm (DSA). It was accepted in 1999 as an ANSI standard and in 2000 as IEEE and NIST standards. It was also accepted in 1998 as an ISO standard and is under consideration for inclusion in some other ISO standards. Unlike the ordinary discrete logarithm problem and the integer factorization problem, no subexponential-time algorithm is known for the elliptic curve discrete logarithm problem. For this reason, the strength-per-key-bit is substantially greater in an algorithm that uses elliptic curves. This paper describes the ANSI X9.62 ECDSA, and discusses related security, implementation, and interoperability issues.",
"title": ""
},
{
"docid": "bc8b40babfc2f16144cdb75b749e3a90",
"text": "The Bitcoin scheme is a rare example of a large scale global payment system in which all the transactions are publicly accessible (but in an anonymous way). We downloaded the full history of this scheme, and analyzed many statistical properties of its associated transaction graph. In this paper we answer for the first time a variety of interesting questions about the typical behavior of users, how they acquire and how they spend their bitcoins, the balance of bitcoins they keep in their accounts, and how they move bitcoins between their various accounts in order to better protect their privacy. In addition, we isolated all the large transactions in the system, and discovered that almost all of them are closely related to a single large transaction that took place in November 2010, even though the associated users apparently tried to hide this fact with many strange looking long chains and fork-merge structures in the transaction graph.",
"title": ""
}
] |
[
{
"docid": "21031b55206dd330852b8d11e8e6a84a",
"text": "To predict the most salient regions of complex natural scenes, saliency models commonly compute several feature maps (contrast, orientation, motion...) and linearly combine them into a master saliency map. Since feature maps have different spatial distribution and amplitude dynamic ranges, determining their contributions to overall saliency remains an open problem. Most state-of-the-art models do not take time into account and give feature maps constant weights across the stimulus duration. However, visual exploration is a highly dynamic process shaped by many time-dependent factors. For instance, some systematic viewing patterns such as the center bias are known to dramatically vary across the time course of the exploration. In this paper, we use maximum likelihood and shrinkage methods to dynamically and jointly learn feature map and systematic viewing pattern weights directly from eye-tracking data recorded on videos. We show that these weights systematically vary as a function of time, and heavily depend upon the semantic visual category of the videos being processed. Our fusion method allows taking these variations into account, and outperforms other stateof-the-art fusion schemes using constant weights over time. The code, videos and eye-tracking data we used for this study are available online.",
"title": ""
},
{
"docid": "d8e7c9b871f542cd40835b131eedb60a",
"text": "Attribute-based encryption (ABE) systems allow encrypting to uncertain receivers by means of an access policy specifying the attributes that the intended receivers should possess. ABE promises to deliver fine-grained access control of encrypted data. However, when data are encrypted using an ABE scheme, key management is difficult if there is a large number of users from various backgrounds. In this paper, we elaborate ABE and propose a new versatile cryptosystem referred to as ciphertext-policy hierarchical ABE (CPHABE). In a CP-HABE scheme, the attributes are organized in a matrix and the users having higher-level attributes can delegate their access rights to the users at a lower level. These features enable a CP-HABE system to host a large number of users from different organizations by delegating keys, e.g., enabling efficient data sharing among hierarchically organized large groups. We construct a CP-HABE scheme with short ciphertexts. The scheme is proven secure in the standard model under non-interactive assumptions.",
"title": ""
},
{
"docid": "d8190669434b167500312091d1a4bf30",
"text": "Path analysis was used to test the predictive and mediational role of self-efficacy beliefs in mathematical problem solving. Results revealed that math self-efficacy was more predictive of problem solving than was math self-concept, perceived usefulness of mathematics, prior experience with mathematics, or gender (N = 350). Self-efficacy also mediated the effect of gender and prior experience on self-concept, perceived usefulness, and problem solving. Gender and prior experience influenced self-concept, perceived usefulness, and problem solving largely through the mediational role of self-efficacy. Men had higher performance, self-efficacy, and self-concept and lower anxiety, but these differences were due largely to the influence of self-efficacy, for gender had a direct effect only on self-efficacy and a prior experience variable. Results support the hypothesized role of self-efficacy in A. Bandura's (1986) social cognitive theory.",
"title": ""
},
{
"docid": "05bc787d000ecf26c8185b084f8d2498",
"text": "Recommendation system is a type of information filtering systems that recommend various objects from a vast variety and quantity of items which are of the user interest. This results in guiding an individual in personalized way to interesting or useful objects in a large space of possible options. Such systems also help many businesses to achieve more profits to sustain in their filed against their rivals. But looking at the amount of information which a business holds it becomes difficult to identify the items of user interest. Therefore personalization or user profiling is one of the challenging tasks that give access to user relevant information which can be used in solving the difficult task of classification and ranking items according to an individual’s interest. Profiling can be done in various ways such as supervised or unsupervised, individual or group profiling, distributive or and non-distributive profiling. Our focus in this paper will be on the dataset which we will use, we identify some interesting facts by using Weka Tool that can be used for recommending the items from dataset .Our aim is to present a novel technique to achieve user profiling in recommendation system. KeywordsMachine Learning; Information Retrieval; User Profiling",
"title": ""
},
{
"docid": "fa0883f4adf79c65a6c13c992ae08b3f",
"text": "Being able to keep the graph scale small while capturing the properties of the original social graph, graph sampling provides an efficient, yet inexpensive solution for social network analysis. The challenge is how to create a small, but representative sample out of the massive social graph with millions or even billions of nodes. Several sampling algorithms have been proposed in previous studies, but there lacks fair evaluation and comparison among them. In this paper, we analyze the state-of art graph sampling algorithms and evaluate their performance on some widely recognized graph properties on directed graphs using large-scale social network datasets. We evaluate not only the commonly used node degree distribution, but also clustering coefficient, which quantifies how well connected are the neighbors of a node in a graph. Through the comparison we have found that none of the algorithms is able to obtain satisfied sampling results in both of these properties, and the performance of each algorithm differs much in different kinds of datasets.",
"title": ""
},
{
"docid": "4f6c7e299b8c7e34778d5c7c10e5a034",
"text": "This study presents an online multiparameter estimation scheme for interior permanent magnet motor drives that exploits the switching ripple of finite control set (FCS) model predictive control (MPC). The combinations consist of two, three, and four parameters are analysed for observability at different operating states. Most of the combinations are rank deficient without persistent excitation (PE) of the system, e.g. by signal injection. This study shows that high frequency current ripples by MPC with FCS are sufficient to create PE in the system. This study also analyses parameter coupling in estimation that results in wrong convergence and propose a decoupling technique. The observability conditions for all the combinations are experimentally validated. Finally, a full parameter estimation along with the decoupling technique is tested at different operating conditions.",
"title": ""
},
{
"docid": "5ba721a06c17731458ef1ecb6584b311",
"text": "BACKGROUND\nPrimary and tension-free closure of a flap is often required after particular surgical procedures (e.g., guided bone regeneration). Other times, flap advancement may be desired for situations such as root coverage.\n\n\nMETHODS\nThe literature was searched for articles that addressed techniques, limitations, and complications associated with flap advancement. These articles were used as background information. In addition, reference information regarding anatomy was cited as necessary to help describe surgical procedures.\n\n\nRESULTS\nThis article describes techniques to advance mucoperiosteal flaps, which facilitate healing. Methods are presented for a variety of treatment scenarios, ranging from minor to major coronal tissue advancement. Anatomic landmarks are identified that need to be considered during surgery. In addition, management of complications associated with flap advancement is discussed.\n\n\nCONCLUSIONS\nTension-free primary closure is attainable. The technique is dependent on the extent that the flap needs to be advanced.",
"title": ""
},
{
"docid": "bb02c3a2c02cce6325fe542f006dde9c",
"text": "In this paper, we argue for a theoretical separation of the free-energy principle from Helmholtzian accounts of the predictive brain. The free-energy principle is a theoretical framework capturing the imperative for biological self-organization in information-theoretic terms. The free-energy principle has typically been connected with a Bayesian theory of predictive coding, and the latter is often taken to support a Helmholtzian theory of perception as unconscious inference. If our interpretation is right, however, a Helmholtzian view of perception is incompatible with Bayesian predictive coding under the free-energy principle. We argue that the free energy principle and the ecological and enactive approach to mind and life make for a much happier marriage of ideas. We make our argument based on three points. First we argue that the free energy principle applies to the whole animal–environment system, and not only to the brain. Second, we show that active inference, as understood by the free-energy principle, is incompatible with unconscious inference understood as analagous to scientific hypothesis-testing, the main tenet of a Helmholtzian view of perception. Third, we argue that the notion of inference at work in Bayesian predictive coding under the free-energy principle is too weak to support a Helmholtzian theory of perception. Taken together these points imply that the free energy principle is best understood in ecological and enactive terms set out in this paper.",
"title": ""
},
{
"docid": "9098d40a9e16a1bd1ed0a9edd96f3258",
"text": "The filter bank multicarrier with offset quadrature amplitude modulation (FBMC/OQAM) is being studied by many researchers as a key enabler for the fifth-generation air interface. In this paper, a hybrid peak-to-average power ratio (PAPR) reduction scheme is proposed for FBMC/OQAM signals by utilizing multi data block partial transmit sequence (PTS) and tone reservation (TR). In the hybrid PTS-TR scheme, the data blocks signal is divided into several segments, and the number of data blocks in each segment is determined by the overlapping factor. In each segment, we select the optimal data block to transmit and jointly consider the adjacent overlapped data block to achieve minimum signal power. Then, the peak reduction tones are utilized to cancel the peaks of the segment FBMC/OQAM signals. Simulation results and analysis show that the proposed hybrid PTS-TR scheme could provide better PAPR reduction than conventional PTS and TR schemes in FBMC/OQAM systems. Furthermore, we propose another multi data block hybrid PTS-TR scheme by exploiting the adjacent multi overlapped data blocks, called as the multi hybrid (M-hybrid) scheme. Simulation results show that the M-hybrid scheme can achieve about 0.2-dB PAPR performance better than the hybrid PTS-TR scheme.",
"title": ""
},
{
"docid": "50b316a52bdfacd5fe319818d0b22962",
"text": "Artificial neural networks (ANN) are used to predict 1) degree program completion, 2) earned hours, and 3) GPA for college students. The feed forward neural net architecture is used with the back propagation learning function and the logistic activation function. The database used for training and validation consisted of 17,476 student transcripts from Fall 1983 through Fall 1994. It is shown that age, gender, race, ACT scores, and reading level are significant in predicting the degree program completion, earned hours, and GPA. Of the three, earned hours proved the most difficult to predict.",
"title": ""
},
{
"docid": "ef8292e79b8c9f463281f2a9c5c410ef",
"text": "In real-time applications, the computer is often required to service programs in response to external signals, and to guarantee that each such program is completely processed within a specified interval following the occurrence of the initiating signal. Such programs are referred to in this paper as time-critical processes, or TCPs.",
"title": ""
},
{
"docid": "0e9e6c1f21432df9dfac2e7205105d46",
"text": "This paper summarises the COSET shared task organised as part of the IberEval workshop. The aim of this task is to classify the topic discussed in a tweet into one of five topics related to the Spanish 2015 electoral cycle. A new dataset was curated for this task and hand-labelled by experts on the task. Moreover, the results of the 17 participants of the task and a review of their proposed systems are presented. In a second phase evaluation, we provided the participants with 15.8 millions tweets in order to test the scalability of their systems.",
"title": ""
},
{
"docid": "e9b8787e5bb1f099e914db890e04dc23",
"text": "This paper presents the design of a compact UHF-RFID tag antenna with several miniaturization techniques including meandering technique and capacitive tip-loading structure. Additionally, T-matching technique is also utilized in the antenna design for impedance matching. This antenna was designed on Rogers 5880 printed circuit board (PCB) with the dimension of 43 × 26 × 0.787 mm3 and relative permittivity, □r of 2.2. The performance of the proposed antenna was analyzed in terms of matched impedance, antenna gain, return loss and tag reading range through the simulation in CST Microwave Studio software. As a result, the proposed antenna obtained a gain of 0.97dB and a maximum reading range of 5.15 m at 921 MHz.",
"title": ""
},
{
"docid": "1abcf9480879b3d29072f09d5be8609d",
"text": "Warm restart techniques on training deep neural networks often achieve better recognition accuracies and can be regarded as easy methods to obtain multiple neural networks with no additional training cost from a single training process. Ensembles of intermediate neural networks obtained by warm restart techniques can provide higher accuracy than a single neural network obtained finally by a whole training process. However, existing methods on both of warm restart and its ensemble techniques use fixed cyclic schedules and have little degree of parameter adaption. This paper extends a class of possible schedule strategies of warm restart, and clarifies their effectiveness for recognition performance. Specifically, we propose parameterized functions and various cycle schedules to improve recognition accuracies by the use of deep neural networks with no additional training cost. Experiments on CIFAR-10 and CIFAR-100 show that our methods can achieve more accurate rates than the existing cyclic training and ensemble methods.",
"title": ""
},
{
"docid": "1f6e92bc8239e358e8278d13ced4a0a9",
"text": "This paper proposes a method for hand pose estimation from RGB images that uses both external large-scale depth image datasets and paired depth and RGB images as privileged information at training time. We show that providing depth information during training significantly improves performance of pose estimation from RGB images during testing. We explore different ways of using this privileged information: (1) using depth data to initially train a depth-based network, (2) using the features from the depthbased network of the paired depth images to constrain midlevel RGB network weights, and (3) using the foreground mask, obtained from the depth data, to suppress the responses from the background area. By using paired RGB and depth images, we are able to supervise the RGB-based network to learn middle layer features that mimic that of the corresponding depth-based network, which is trained on large-scale, accurately annotated depth data. During testing, when only an RGB image is available, our method produces accurate 3D hand pose predictions. Our method is also tested on 2D hand pose estimation. Experiments on three public datasets show that the method outperforms the state-of-the-art methods for hand pose estimation using RGB image input.",
"title": ""
},
{
"docid": "1106cd6413b478fd32d250458a2233c5",
"text": "Submitted: Aug 7, 2013; Accepted: Sep 18, 2013; Published: Sep 25, 2013 Abstract: This article reviews the common used forecast error measurements. All error measurements have been joined in the seven groups: absolute forecasting errors, measures based on percentage errors, symmetric errors, measures based on relative errors, scaled errors, relative measures and other error measures. The formulas are presented and drawbacks are discussed for every accuracy measurements. To reduce the impact of outliers, an Integral Normalized Mean Square Error have been proposed. Due to the fact that each error measure has the disadvantages that can lead to inaccurate evaluation of the forecasting results, it is impossible to choose only one measure, the recommendations for selecting the appropriate error measurements are given.",
"title": ""
},
{
"docid": "81fc9abd3e2ad86feff7bd713cff5915",
"text": "With the advance of the Internet, e-commerce systems have become extremely important and convenient to human being. More and more products are sold on the web, and more and more people are purchasing products online. As a result, an increasing number of customers post product reviews at merchant websites and express their opinions and experiences in any network space such as Internet forums, discussion groups, and blogs. So there is a large amount of data records related to products on the Web, which are useful for both manufacturers and customers. Mining product reviews becomes a hot research topic, and prior researches mostly base on product features to analyze the opinions. So mining product features is the first step to further reviews processing. In this paper, we present how to mine product features. The proposed extraction approach is different from the previous methods because we only mine the features of the product in opinion sentences which the customers have expressed their positive or negative experiences on. In order to find opinion sentence, a SentiWordNet-based algorithm is proposed. There are three steps to perform our task: (1) identifying opinion sentences in each review which is positive or negative via SentiWordNet; (2) mining product features that have been commented on by customers from opinion sentences; (3) pruning feature to remove those incorrect features. Compared to previous work, our experimental result achieves higher precision and recall.",
"title": ""
},
{
"docid": "0cd2da131bf78526c890dae72514a8f0",
"text": "This paper presents a research model to explicate that the level of consumers’ participation on companies’ brand microblogs is influenced by their brand attachment process. That is, self-congruence and partner quality affect consumers’ trust and commitment toward companies’ brands, which in turn influence participation on brand microblogs. Further, we propose that gender has important moderating effects in our research model. We empirically test the research hypotheses through an online survey. The findings illustrate that self-congruence and partner quality have positive effects on trust and commitment. Trust affects commitment and participation, while participation is also influenced by commitment. More importantly, the effects of self-congruence on trust and commitment are found to be stronger for male consumers than females. In contrast, the effects of partner quality on trust and commitment are stronger for female consumers than males. Trust posits stronger effects on commitment and participation for males, while commitment has a stronger effect on participation for females. We believe that our findings contribute to the literature on consumer participation behavior and gender differences on brand microblogs. Companies can also apply our findings to strengthen their brand building and participation level of different consumers on their microblogs. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4172a0c101756ea8207b65b0dfbbe8ce",
"text": "Inspired by ACTORS [7, 17], we have implemented an interpreter for a LISP-like language, SCHEME, based on the lambda calculus [2], but extended for side effects, multiprocessing, and process synchronization. The purpose of this implementation is tutorial. We wish to: 1. alleviate the confusion caused by Micro-PLANNER, CONNIVER, etc., by clarifying the embedding of non-recursive control structures in a recursive host language like LISP. 2. explain how to use these control structures, independent of such issues as pattern matching and data base manipulation. 3. have a simple concrete experimental domain for certain issues of programming semantics and style. This paper is organized into sections. The first section is a short “reference manual” containing specifications for all the unusual features of SCHEME. Next, we present a sequence of programming examples which illustrate various programming styles, and how to use them. This will raise certain issues of semantics which we will try to clarify with lambda calculus in the third section. In the fourth section we will give a general discussion of the issues facing an implementor of an interpreter for a language based on lambda calculus. Finally, we will present a completely annotated interpreter for SCHEME, written in MacLISP [13], to acquaint programmers with the tricks of the trade of implementing non-recursive control structures in a recursive language like LISP. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory’s artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C0643. 1. The SCHEME Reference Manual SCHEME is essentially a full-funarg LISP. LAMBDAexpressions need not be QUOTEd, FUNCTIONed, or *FUNCTIONed when passed as arguments or returned as values; they will evaluate to closures of themselves. All LISP functions (i.e.,EXPRs,SUBRs, andLSUBRs, butnotFEXPRs,FSUBRs, orMACROs) are primitive operators in SCHEME, and have the same meaning as they have in LISP. Like LAMBDAexpressions, primitive operators and numbers are self-evaluating (they evaluate to trivial closures of themselves). There are a number of special primitives known as AINTs which are to SCHEME as FSUBRs are to LISP. We will enumerate them here. IF This is the primitive conditional operator. It takes three arguments. If the first evaluates to non-NIL , it evaluates the second expression, and otherwise the third. QUOTE As in LISP, this quotes the argument form so that it will be passed verbatim as data. The abbreviation “ ’FOO” may be used instead of “ (QUOTE FOO) ”. 406 SUSSMAN AND STEELE DEFINE This is analogous to the MacLISP DEFUNprimitive (but note that theLAMBDA must appear explicitly!). It is used for defining a function in the “global environment” permanently, as opposed to LABELS(see below), which is used for temporary definitions in a local environment.DEFINE takes a name and a lambda expression; it closes the lambda expression in the global environment and stores the closure in the LISP value cell of the name (which is a LISP atom). LABELS We have decided not to use the traditional LABEL primitive in this interpreter because it is difficult to define several mutually recursive functions using only LABEL. 
The solution, which Hewitt [17] also uses, is to adopt an ALGOLesque block syntax: (LABELS <function definition list> <expression>) This has the effect of evaluating the expression in an environment where all the functions are defined as specified by the definitions list. Furthermore, the functions are themselves closed in that environment, and not in the outer environment; this allows the functions to call themselvesand each otherecursively. For example, consider a function which counts all the atoms in a list structure recursively to all levels, but which doesn’t count the NIL s which terminate lists (but NIL s in theCARof some list count). In order to perform this we use two mutually recursive functions, one to count the car and one to count the cdr, as follows: (DEFINE COUNT (LAMBDA (L) (LABELS ((COUNTCAR (LAMBDA (L) (IF (ATOM L) 1 (+ (COUNTCAR (CAR L)) (COUNTCDR (CDR L)))))) (COUNTCDR (LAMBDA (L) (IF (ATOM L) (IF (NULL L) 0 1) (+ (COUNTCAR (CAR L)) (COUNTCDR (CDR L))))))) (COUNTCDR L)))) ;Note: COUNTCDR is defined here. ASET This is the side effect primitive. It is analogous to the LISP function SET. For example, to define a cell [17], we may useASETas follows: (DEFINE CONS-CELL (LAMBDA (CONTENTS) (LABELS ((THE-CELL (LAMBDA (MSG) (IF (EQ MSG ’CONTENTS?) CONTENTS (IF (EQ MSG ’CELL?) ’YES (IF (EQ (CAR MSG) ’<-) (BLOCK (ASET ’CONTENTS (CADR MSG)) THE-CELL) (ERROR ’|UNRECOGNIZED MESSAGE CELL| MSG ’WRNG-TYPE-ARG))))))) THE-CELL))) INTERPRETER FOR EXTENDED LAMBDA CALCULUS 407 Those of you who may complain about the lack of ASETQare invited to write(ASET’ foo bar) instead of(ASET ’foo bar) . EVALUATE This is similar to the LISP functionEVAL. It evaluates its argument, and then evaluates the resulting s-expression as SCHEME code. CATCH This is the “escape operator” which gives the user a handle on the control structure of the interpreter. The expression: (CATCH <identifier> <expression>) evaluates<expression> in an environment where <identifier> is bound to a continuation which is “just about to return from the CATCH”; that is, if the continuation is called as a function of one argument, then control proceeds as if the CATCHexpression had returned with the supplied (evaluated) argument as its value. For example, consider the following obscure definition of SQRT(Sussman’s favorite style/Steele’s least favorite): (DEFINE SQRT (LAMBDA (X EPSILON) ((LAMBDA (ANS LOOPTAG) (CATCH RETURNTAG (PROGN (ASET ’LOOPTAG (CATCH M M)) ;CREATE PROG TAG (IF (< (ABS (-$ (*$ ANS ANS) X)) EPSILON) (RETURNTAG ANS) ;RETURN NIL) ;JFCL (ASET ’ANS (//$ (+$ (//$ X ANS) ANS) 2.0)) (LOOPTAG LOOPTAG)))) ;GOTO 1.0 NIL))) Anyone who doesn’t understand how this manages to work probably should not attempt to useCATCH. As another example, we can define a THROWfunction, which may then be used with CATCHmuch as they are in LISP: (DEFINE THROW (LAMBDA (TAG RESULT) (TAG RESULT))) CREATE!PROCESS This is the process generator for multiprocessing. It takes one argument, an expression to be evaluated in the current environment as a separate parallel process. If the expression ever returns a value, the process automatically terminates. The value ofCREATE!PROCESSis a process id for the newly generated process. Note that the newly created process will not actually run until it is explicitly started. START!PROCESS This takes one argument, a process id, and starts up that process. It then runs. 408 SUSSMAN AND STEELE STOP!PROCESS This also takes a process id, but stops the process. 
The stopped process may be continued from where it was stopped by using START!PROCESSagain on it. The magic global variable**PROCESS** always contains the process id of the currently running process; thus a process can stop itself by doing (STOP!PROCESS **PROCESS**) . A stopped process is garbage collected if no live process has a pointer to its process id. EVALUATE!UNINTERRUPTIBLY This is the synchronization primitive. It evaluates an expression uninterruptibly; i.e., no other process may run until the expression has returned a value. Note that if a funarg is returned from the scope of an EVALUATE!UNINTERRUPTIBLY, then that funarg will be uninterruptible when it is applied; that is, the uninterruptibility property follows the rules of variable scoping. For example, consider the following function: (DEFINE SEMGEN (LAMBDA (SEMVAL) (LIST (LAMBDA () (EVALUATE!UNINTERRUPTIBLY (ASET’ SEMVAL (+ SEMVAL 1)))) (LABELS (P (LAMBDA () (EVALUATE!UNINTERRUPTIBLY (IF (PLUSP SEMVAL) (ASET’ SEMVAL (SEMVAL 1)) (P))))) P)))) This returns a pair of functions which are V and P operations on a newly created semaphore. The argument to SEMGENis the initial value for the semaphore. Note that P busy-waits by iterating if necessary; because EVALUATE!UNINTERRUPTIBLYuses variable-scoping rules, other processes have a chance to get in at the beginning of each iteration. This busy-wait can be made much more efficient by replacing the expression (P) in the definition ofP with ((LAMBDA (ME) (BLOCK (START!PROCESS (CREATE!PROCESS ’(START!PROCESS ME))) (STOP!PROCESS ME) (P))) **PROCESS**) Let’s see you figure this one out! Note that a STOP!PROCESSwithin anEVALUATE! UNINTERRUPTIBLYforces the process to be swapped out even if it is the current one, and so other processes get to run; but as soon as it gets swapped in again, others are locked out as before. Besides theAINTs, SCHEME has a class of primitives known as AMACRO s These are similar to MacLISPMACROs, in that they are expanded into equivalent code before being executed. Some AMACRO s supplied with the SCHEME interpreter: INTERPRETER FOR EXTENDED LAMBDA CALCULUS 409 COND This is like the MacLISPCONDstatement, except that singleton clauses (where the result of the predicate is the returned value) are not allowed. AND, OR These are also as in MacLISP. BLOCK This is like the MacLISPPROGN, but arranges to evaluate its last argument without an extra net control frame (explained later), so that the last argument may involved in an iteration. Note that in SCHEME, unlike MacLISP, the body of a LAMBDAexpression is not an implicit PROGN. DO This is like the MacLISP “new-style” DO; old-styleDOis not supported. AMAPCAR , AMAPLIST These are likeMAPCARandMAPLIST, but they expect a SCHEME lambda closure for the first argument. To use SCHEME, simply incant at DDT (on MIT-AI): 3",
"title": ""
}
] |
scidocsrr
|
589a982661939792dcd0fc1ff436e0da
|
spherical vectors and the geometric interpretation of unit quaternions
|
[
{
"docid": "b47904279dee1695d67fafcf65b87895",
"text": "Some of the confusions concerning quaternions as they are employed in spacecraft attitude work are discussed. The order of quaternion multiplication is discussed in terms of its historical development and its consequences for the quaternion imaginaries. The di erent formulations for the quaternions are also contrasted. It is shown that the three Hamilton imaginaries cannot be interpreted as the basis of the vector space of physical vectors but only as constant numerical column vectors, the autorepresentation of a physical basis.",
"title": ""
}
] |
[
{
"docid": "45079629c4bc09cc8680b3d9ac325112",
"text": "Power consumption is of utmost concern in sensor networks. Researchers have several ways of measuring the power consumption of a complete sensor network, but they are typically either impractical or inaccurate. To meet the need for practical and scalable measurement of power consumption of sensor networks, we have developed a cycle-accurate simulator, called COOJA/MSPsim, that enables live power estimation of systems running on MSP430 processors. This demonstration shows the ease of use and the power measurement accuracy of COOJA/MSPsim. The demo setup consists of a small sensor network and a laptop. Beside gathering software-based power measurements from the motes, the laptop runs COOJA/MSPsim to simulate the same network.We visualize the power consumption of both the simulated and the real sensor network, and show that the simulator produces matching results.",
"title": ""
},
{
"docid": "c70e11160c90bd67caa2294c499be711",
"text": "The vital sign monitoring through Impulse Radio Ultra-Wide Band (IR-UWB) radar provides continuous assessment of a patient's respiration and heart rates in a non-invasive manner. In this paper, IR UWB radar is used for monitoring respiration and the human heart rate. The breathing and heart rate frequencies are extracted from the signal reflected from the human body. A Kalman filter is applied to reduce the measurement noise from the vital signal. An algorithm is presented to separate the heart rate signal from the breathing harmonics. An auto-correlation based technique is applied for detecting random body movements (RBM) during the measurement process. Experiments were performed in different scenarios in order to show the validity of the algorithm. The vital signs were estimated for the signal reflected from the chest, as well as from the back side of the body in different experiments. The results from both scenarios are compared for respiration and heartbeat estimation accuracy.",
"title": ""
},
{
"docid": "928eb797289d2630ff2e701ced782a14",
"text": "The restricted Boltzmann machine (RBM) has received an increasing amount of interest in recent years. It determines good mapping weights that capture useful latent features in an unsupervised manner. The RBM and its generalizations have been successfully applied to a variety of image classification and speech recognition tasks. However, most of the existing RBM-based models disregard the preservation of the data manifold structure. In many real applications, the data generally reside on a low-dimensional manifold embedded in high-dimensional ambient space. In this brief, we propose a novel graph regularized RBM to capture features and learning representations, explicitly considering the local manifold structure of the data. By imposing manifold-based locality that preserves constraints on the hidden layer of the RBM, the model ultimately learns sparse and discriminative representations. The representations can reflect data distributions while simultaneously preserving the local manifold structure of data. We test our model using several benchmark image data sets for unsupervised clustering and supervised classification problem. The results demonstrate that the performance of our method exceeds the state-of-the-art alternatives.",
"title": ""
},
{
"docid": "50ec9d25a24e67481a4afc6a9519b83c",
"text": "Weakly supervised image segmentation is an important yet challenging task in image processing and pattern recognition fields. It is defined as: in the training stage, semantic labels are only at the image-level, without regard to their specific object/scene location within the image. Given a test image, the goal is to predict the semantics of every pixel/superpixel. In this paper, we propose a new weakly supervised image segmentation model, focusing on learning the semantic associations between superpixel sets (graphlets in this paper). In particular, we first extract graphlets from each image, where a graphlet is a small-sized graph measures the potential of multiple spatially neighboring superpixels (i.e., the probability of these superpixels sharing a common semantic label, such as the sky or the sea). To compare different-sized graphlets and to incorporate image-level labels, a manifold embedding algorithm is designed to transform all graphlets into equal-length feature vectors. Finally, we present a hierarchical Bayesian network to capture the semantic associations between postembedding graphlets, based on which the semantics of each superpixel is inferred accordingly. Experimental results demonstrate that: 1) our approach performs competitively compared with the state-of-the-art approaches on three public data sets and 2) considerable performance enhancement is achieved when using our approach on segmentation-based photo cropping and image categorization.",
"title": ""
},
{
"docid": "e48313fd23a22c96cceb62434b044e43",
"text": "It is unclear whether combined leg and arm high-intensity interval training (HIIT) improves fitness and morphological characteristics equal to those of leg-based HIIT programs. The aim of this study was to compare the effects of HIIT using leg-cycling (LC) and arm-cranking (AC) ergometers with an HIIT program using only LC. Effects on aerobic capacity and skeletal muscle were analyzed. Twelve healthy male subjects were assigned into two groups. One performed LC-HIIT (n=7) and the other LC- and AC-HIIT (n=5) twice weekly for 16 weeks. The training programs consisted of eight to 12 sets of >90% VO2 (the oxygen uptake that can be utilized in one minute) peak for 60 seconds with a 60-second active rest period. VO2 peak, watt peak, and heart rate were measured during an LC incremental exercise test. The cross-sectional area (CSA) of trunk and thigh muscles as well as bone-free lean body mass were measured using magnetic resonance imaging and dual-energy X-ray absorptiometry. The watt peak increased from baseline in both the LC (23%±38%; P<0.05) and the LC-AC groups (11%±9%; P<0.05). The CSA of the quadriceps femoris muscles also increased from baseline in both the LC (11%±4%; P<0.05) and the LC-AC groups (5%±5%; P<0.05). In contrast, increases were observed in the CSA of musculus psoas major (9%±11%) and musculus anterolateral abdominal (7%±4%) only in the LC-AC group. These results suggest that a combined LC- and AC-HIIT program improves aerobic capacity and muscle hypertrophy in both leg and trunk muscles.",
"title": ""
},
{
"docid": "49b0ba019f6f968804608aeacec2a959",
"text": "In this paper, we introduce a novel problem of audio-visual event localization in unconstrained videos. We define an audio-visual event as an event that is both visible and audible in a video segment. We collect an Audio-Visual Event (AVE) dataset to systemically investigate three temporal localization tasks: supervised and weakly-supervised audio-visual event localization, and cross-modality localization. We develop an audio-guided visual attention mechanism to explore audio-visual correlations, propose a dual multimodal residual network (DMRN) to fuse information over the two modalities, and introduce an audio-visual distance learning network to handle the cross-modality localization. Our experiments support the following findings: joint modeling of auditory and visual modalities outperforms independent modeling, the learned attention can capture semantics of sounding objects, temporal alignment is important for audio-visual fusion, the proposed DMRN is effective in fusing audio-visual features, and strong correlations between the two modalities enable cross-modality localization.",
"title": ""
},
{
"docid": "ac9a8cd0b53ff3f2e9de002fa9a66121",
"text": "Life-span developmental psychology involves the study of constancy and change in behavior throughout the life course. One aspect of life-span research has been the advancement of a more general, metatheoretical view on the nature of development. The family of theoretical perspectives associated with this metatheoretical view of life-span developmental psychology includes the recognition of multidirectionality in ontogenetic change, consideration of both age-connected and disconnected developmental factors, a focus on the dynamic and continuous interplay between growth (gain) and decline (loss), emphasis on historical embeddedness and other structural contextual factors, and the study of the range of plasticity in development. Application of the family of perspectives associated with life-span developmental psychology is illustrated for the domain of intellectual development. Two recently emerging perspectives of the family of beliefs are given particular attention. The first proposition is methodological and suggests that plasticity can best be studied with a research strategy called testing-the-limits. The second proposition is theoretical and proffers that any developmental change includes the joint occurrence of gain (growth) and loss (decline) in adaptive capacity. To assess the pattern of positive (gains) and negative (losses) consequences resulting from development, it is necessary to know the criterion demands posed by the individual and the environment during the lifelong process of adaptation.",
"title": ""
},
{
"docid": "64f15815e4c1c94c3dfd448dec865b85",
"text": "Modern software systems are typically large and complex, making comprehension of these systems extremely difficult. Experienced programmers comprehend code by seamlessly processing synonyms and other word relations. Thus, we believe that automated comprehension and software tools can be significantly improved by leveraging word relations in software. In this paper, we perform a comparative study of six state of the art, English-based semantic similarity techniques and evaluate their effectiveness on words from the comments and identifiers in software. Our results suggest that applying English-based semantic similarity techniques to software without any customization could be detrimental to the performance of the client software tools. We propose strategies to customize the existing semantic similarity techniques to software, and describe how various program comprehension tools can benefit from word relation information.",
"title": ""
},
{
"docid": "7f51bdc05c4a1bf610f77b629d8602f7",
"text": "Special Issue Anthony Vance Brigham Young University [email protected] Bonnie Brinton Anderson Brigham Young University [email protected] C. Brock Kirwan Brigham Young University [email protected] Users’ perceptions of risks have important implications for information security because individual users’ actions can compromise entire systems. Therefore, there is a critical need to understand how users perceive and respond to information security risks. Previous research on perceptions of information security risk has chiefly relied on self-reported measures. Although these studies are valuable, risk perceptions are often associated with feelings—such as fear or doubt—that are difficult to measure accurately using survey instruments. Additionally, it is unclear how these self-reported measures map to actual security behavior. This paper contributes to this topic by demonstrating that risk-taking behavior is effectively predicted using electroencephalography (EEG) via event-related potentials (ERPs). Using the Iowa Gambling Task, a widely used technique shown to be correlated with real-world risky behaviors, we show that the differences in neural responses to positive and negative feedback strongly predict users’ information security behavior in a separate laboratory-based computing task. In addition, we compare the predictive validity of EEG measures to that of self-reported measures of information security risk perceptions. Our experiments show that self-reported measures are ineffective in predicting security behaviors under a condition in which information security is not salient. However, we show that, when security concerns become salient, self-reported measures do predict security behavior. Interestingly, EEG measures significantly predict behavior in both salient and non-salient conditions, which indicates that EEG measures are a robust predictor of security behavior.",
"title": ""
},
{
"docid": "9b013f0574cc8fd4139a94aa5cf84613",
"text": "Monte Carlo Tree Search (MCTS) methods have proven powerful in planning for sequential decision-making problems such as Go and video games, but their performance can be poor when the planning depth and sampling trajectories are limited or when the rewards are sparse. We present an adaptation of PGRD (policy-gradient for rewarddesign) for learning a reward-bonus function to improve UCT (a MCTS algorithm). Unlike previous applications of PGRD in which the space of reward-bonus functions was limited to linear functions of hand-coded state-action-features, we use PGRD with a multi-layer convolutional neural network to automatically learn features from raw perception as well as to adapt the non-linear reward-bonus function parameters. We also adopt a variance-reducing gradient method to improve PGRD’s performance. The new method improves UCT’s performance on multiple ATARI games compared to UCT without the reward bonus. Combining PGRD and Deep Learning in this way should make adapting rewards for MCTS algorithms far more widely and practically applicable than before.",
"title": ""
},
{
"docid": "729cb5a59c1458ce6c9ef3fa29ca1d98",
"text": "The Simulink/Stateflow toolset is an integrated suite enabling model-based design and has become popular in the automotive and aeronautics industries. We have previously developed a translator called Simtolus from Simulink to the synchronous language Lustre and we build upon that work by encompassing Stateflow as well. Stateflow is problematical for synchronous languages because of its unbounded behaviour so we propose analysis techniques to define a subset of Stateflow for which we can define a synchronous semantics. We go further and define a \"safe\" subset of Stateflow which elides features which are potential sources of errors in Stateflow designs. We give an informal presentation of the Stateflow to Lustre translation process and show how our model-checking tool Lesar can be used to verify some of the semantical checks we have proposed. Finally, we present a small case-study.",
"title": ""
},
{
"docid": "a3772746888956cf78e56084f74df0bf",
"text": "Emerging interest of trading companies and hedge funds in mining social web has created new avenues for intelligent systems that make use of public opinion in driving investment decisions. It is well accepted that at high frequency trading, investors are tracking memes rising up in microblogging forums to count for the public behavior as an important feature while making short term investment decisions. We investigate the complex relationship between tweet board literature (like bullishness, volume, agreement etc) with the financial market instruments (like volatility, trading volume and stock prices). We have analyzed Twitter sentiments for more than 4 million tweets between June 2010 and July 2011 for DJIA, NASDAQ-100 and 11 other big cap technological stocks. Our results show high correlation (upto 0.88 for returns) between stock prices and twitter sentiments. Further, using Granger’s Causality Analysis, we have validated that the movement of stock prices and indices are greatly affected in the short term by Twitter discussions. Finally, we have implemented Expert Model Mining System (EMMS) to demonstrate that our forecasted returns give a high value of R-square (0.952) with low Maximum Absolute Percentage Error (MaxAPE) of 1.76% for Dow Jones Industrial Average (DJIA). We introduce a novel way to make use of market monitoring elements derived from public mood to retain a portfolio within limited risk state (highly improved hedging bets) during typical market conditions.",
"title": ""
},
{
"docid": "139a89ce2fcdfb987aa3476d3618b919",
"text": "Automating the development of construction schedules has been an interesting topic for researchers around the world for almost three decades. Researchers have approached solving scheduling problems with different tools and techniques. Whenever a new artificial intelligence or optimization tool has been introduced, researchers in the construction field have tried to use it to find the answer to one of their key problems—the “better” construction schedule. Each researcher defines this “better” slightly different. This article reviews the research on automation in construction scheduling from 1985 to 2014. It also covers the topic using different approaches, including case-based reasoning, knowledge-based approaches, model-based approaches, genetic algorithms, expert systems, neural networks, and other methods. The synthesis of the results highlights the share of the aforementioned methods in tackling the scheduling challenge, with genetic algorithms shown to be the most dominant approach. Although the synthesis reveals the high applicability of genetic algorithms to the different aspects of managing a project, including schedule, cost, and quality, it exposed a more limited project management application for the other methods.",
"title": ""
},
{
"docid": "1670dda371458257c8f86390b398b3f8",
"text": "Latent topic model such as Latent Dirichlet Allocation (LDA) has been designed for text processing and has also demonstrated success in the task of audio related processing. The main idea behind LDA assumes that the words of each document arise from a mixture of topics, each of which is a multinomial distribution over the vocabulary. When applying the original LDA to process continuous data, the wordlike unit need be first generated by vector quantization (VQ). This data discretization usually results in information loss. To overcome this shortage, this paper introduces a new topic model named GaussianLDA for audio retrieval. In the proposed model, we consider continuous emission probability, Gaussian instead of multinomial distribution. This new topic model skips the vector quantization and directly models each topic as a Gaussian distribution over audio features. It avoids discretization by this way and integrates the procedure of clustering. The experiments of audio retrieval demonstrate that GaussianLDA achieves better performance than other compared methods. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "39d3f1a5d40325bdc4bca9ee50241c9e",
"text": "This paper reviews the recent progress of quantum-dot semiconductor optical amplifiers developed as ultrawideband polarization-insensitive high-power amplifiers, high-speed signal regenerators, and wideband wavelength converters. A semiconductor optical amplifier having a gain of > 25 dB, noise figure of < 5 dB, and 3-dB saturation output power of > 20 dBm, over the record widest bandwidth of 90 nm among all kinds of optical amplifiers, and also having a penalty-free output power of 23 dBm, the record highest among all the semiconductor optical amplifiers, was realized by using quantum dots. By utilizing isotropically shaped quantum dots, the TM gain, which is absent in the standard Stranski-Krastanow QDs, has been drastically enhanced, and nearly polarization-insensitive SOAs have been realized for the first time. With an ultrafast gain response unique to quantum dots, an optical regenerator having receiver-sensitivity improving capability of 4 dB at a BER of 10-9 and operating speed of > 40 Gb/s has been successfully realized with an SOA chip. This performance achieved together with simplicity of structure suggests a potential for low-cost realization of regenerative transmission systems.",
"title": ""
},
{
"docid": "0aabb07ef22ef59d6573172743c6378b",
"text": "Learning from multiple sources of information is an important problem in machine-learning research. The key challenges are learning representations and formulating inference methods that take into account the complementarity and redundancy of various information sources. In this paper we formulate a variational autoencoder based multi-source learning framework in which each encoder is conditioned on a different information source. This allows us to relate the sources via the shared latent variables by computing divergence measures between individual source’s posterior approximations. We explore a variety of options to learn these encoders and to integrate the beliefs they compute into a consistent posterior approximation. We visualise learned beliefs on a toy dataset and evaluate our methods for learning shared representations and structured output prediction, showing trade-offs of learning separate encoders for each information source. Furthermore, we demonstrate how conflict detection and redundancy can increase robustness of inference in a multi-source setting.",
"title": ""
},
{
"docid": "2b32087daf5c104e60f91ebf19cd744d",
"text": "A large amount of food photos are taken in restaurants for diverse reasons. This dish recognition problem is very challenging, due to different cuisines, cooking styles and the intrinsic difficulty of modeling food from its visual appearance. Contextual knowledge is crucial to improve recognition in such scenario. In particular, geocontext has been widely exploited for outdoor landmark recognition. Similarly, we exploit knowledge about menus and geolocation of restaurants and test images. We first adapt a framework based on discarding unlikely categories located far from the test image. Then we reformulate the problem using a probabilistic model connecting dishes, restaurants and geolocations. We apply that model in three different tasks: dish recognition, restaurant recognition and geolocation refinement. Experiments on a dataset including 187 restaurants and 701 dishes show that combining multiple evidences (visual, geolocation, and external knowledge) can boost the performance in all tasks.",
"title": ""
},
{
"docid": "c4332dfb8e8117c3deac7d689b8e259b",
"text": "Learning through experience is time-consuming, inefficient and often bad for your cortisol levels. To address this problem, a number of recently proposed teacherstudent methods have demonstrated the benefits of private tuition, in which a single model learns from an ensemble of more experienced tutors. Unfortunately, the cost of such supervision restricts good representations to a privileged minority. Unsupervised learning can be used to lower tuition fees, but runs the risk of producing networks that require extracurriculum learning to strengthen their CVs and create their own LinkedIn profiles1. Inspired by the logo on a promotional stress ball at a local recruitment fair, we make the following three contributions. First, we propose a novel almost no supervision training algorithm that is effective, yet highly scalable in the number of student networks being supervised, ensuring that education remains affordable. Second, we demonstrate our approach on a typical use case: learning to bake, developing a method that tastily surpasses the current state of the art. Finally, we provide a rigorous quantitive analysis of our method, proving that we have access to a calculator2. Our work calls into question the long-held dogma that life is the best teacher. Give a student a fish and you feed them for a day, teach a student to gatecrash seminars and you feed them until the day they move to Google.",
"title": ""
},
{
"docid": "ef2738cfced7ef069b13e5b5dca1558b",
"text": "Organic agriculture (OA) is practiced on 1% of the global agricultural land area and its importance continues to grow. Specifically, OA is perceived by many as having less Advances inAgronomy, ISSN 0065-2113 © 2016 Elsevier Inc. http://dx.doi.org/10.1016/bs.agron.2016.05.003 All rights reserved. 1 ARTICLE IN PRESS",
"title": ""
}
] |
scidocsrr
|
443fef97c28d08cad56443529380e197
|
Gust loading factor — past , present and future
|
[
{
"docid": "c49ae120bca82ef0d9e94115ad7107f2",
"text": "An evaluation and comparison of seven of the world’s major building codes and standards is conducted in this study, with specific discussion of their estimations of the alongwind, acrosswind, and torsional response, where applicable, for a given building. The codes and standards highlighted by this study are those of the United States, Japan, Australia, the United Kingdom, Canada, China, and Europe. In addition, the response predicted by using the measured power spectra of the alongwind, acrosswind, and torsional responses for several building shapes tested in a wind tunnel are presented, and a comparison between the response predicted by wind tunnel data and that estimated by some of the standards is conducted. This study serves not only as a comparison of the response estimates by international codes and standards, but also introduces a new set of wind tunnel data for validation of wind tunnel-based empirical expressions. 1.0 Introduction Under the influence of dynamic wind loads, typical high-rise buildings oscillate in the alongwind, acrosswind, and torsional directions. The alongwind motion primarily results from pressure fluctuations on the windward and leeward faces, which generally follows the fluctuations in the approach flow, at least in the low frequency range. Therefore, alongwind aerodynamic loads may be quantified analytically utilizing quasi-steady and strip theories, with dynamic effects customarily represented by a random-vibrationbased “Gust Factor Approach” (Davenport 1967, Vellozzi & Cohen 1968, Vickery 1970, Simiu 1976, Solari 1982, ESDU 1989, Gurley & Kareem 1993). However, the acrosswind motion is introduced by pressure fluctuations on the side faces which are influenced by fluctuations in the separated shear layers and wake dynamics (Kareem 1982). This renders the applicability of strip and quasi-steady theories rather doubtful. Similarly, the wind-induced torsional effects result from an imbalance in the instantaneous pressure distribution on the building surface. These load effects are further amplified in asymmetric buildings as a result of inertial coupling (Kareem 1985). Due to the complexity of the acrosswind and torsional responses, physical modeling of fluid-structure interactions remains the only viable means of obtaining information on wind loads, though recently, research in the area of computational fluid dynam1. Graduate Student & Corresponding Author, NatHaz Modeling Laboratory, Department of Civil Engineering and Geological Sciences, University of Notre Dame, Notre Dame, IN, 46556. e-mail: [email protected] 2. Professor, NatHaz Modeling Laboratory, Department of Civil Engineering and Geological Sciences, University of Notre Dame, Notre Dame, IN, 46556",
"title": ""
},
{
"docid": "a4cc4d7bf07ee576a9b5a5fdddc02024",
"text": "Most international codes and standards provide guidelines and procedures for assessing the along-wind effec structures. Despite their common use of the ‘‘gust loading factor’’ ~GLF! approach, sizeable scatter exists among the wind eff predicted by the various codes and standards under similar flow conditions. This paper presents a comprehensive assessment o of this scatter through a comparison of the along-wind loads and their effects on tall buildings recommended by major internation and standards. ASCE 7-98 ~United States !, AS1170.2-89~Australia!, NBC-1995~Canada!, RLB-AIJ-1993 ~Japan!, and Eurocode-1993 ~Europe! are examined in this study. The comparisons consider the definition of wind characteristics, mean wind loads, GLF, eq static wind loads, and attendant wind load effects. It is noted that the scatter in the predicted wind loads and their effects arises from the variations in the definition of wind field characteristics in the respective codes and standards. A detailed example is pre illustrate the overall comparison and to highlight the main findings of this paper. DOI: 10.1061/ ~ASCE!0733-9445~2002!128:6~788! CE Database keywords: Buildings, highrise; Building codes; Wind loads; Dynamics; Wind velocity.",
"title": ""
},
{
"docid": "90b248a3b141fc55eb2e55d274794953",
"text": "The aerodynamic admittance function (AAF) has been widely invoked to relate wind pressures on building surfaces to the oncoming wind velocity. In current practice, strip and quasi-steady theories are generally employed in formulating wind effects in the along-wind direction. These theories permit the representation of the wind pressures on building surfaces in terms of the oncoming wind velocity field. Synthesis of the wind velocity field leads to a generalized wind load that employs the AAF. This paper reviews the development of the current AAF in use. It is followed by a new definition of the AAF, which is based on the base bending moment. It is shown that the new AAF is numerically equivalent to the currently used AAF for buildings with linear mode shape and it can be derived experimentally via high frequency base balance. New AAFs for square and rectangular building models were obtained and compared with theoretically derived expressions. Some discrepancies between experimentally and theoretically derived AAFs in the high frequency range were noted.",
"title": ""
},
{
"docid": "71c34b48cd22a0a8bc9b507e05919301",
"text": "Under the action of wind, tall buildings oscillate simultaneously in the alongwind, acrosswind, and torsional directions. While the alongwind loads have been successfully treated using quasi-steady and strip theories in terms of gust loading factors, the acrosswind and torsional loads cannot be treated in this manner, since these loads cannot be related in a straightforward manner to the fluctuations in the approach flow. Accordingly, most current codes and standards provide little guidance for the acrosswind and torsional response. To fill this gap, a preliminary, interactive database of aerodynamic loads is presented, which can be accessed by any user with Microsoft Explorer at the URL address http://www.nd.edu/;nathaz/. The database is comprised of high-frequency base balance measurements on a host of isolated tall building models. Combined with the analysis procedure provided, the nondimensional aerodynamic loads can be used to compute the wind-induced response of tall buildings. The influence of key parameters, such as the side ratio, aspect ratio, and turbulence characteristics for rectangular sections, is also discussed. The database and analysis procedure are viable candidates for possible inclusion as a design guide in the next generation of codes and standards. DOI: 10.1061/~ASCE!0733-9445~2003!129:3~394! CE Database keywords: Aerodynamics; Wind loads; Wind tunnels; Databases; Random vibration; Buildings, high-rise; Turbulence. 394 / JOURNAL OF STRUCTURAL ENGINEERING © ASCE / MARCH 2003 tic model tests are presently used as routine tools in commercial design practice. However, considering the cost and lead time needed for wind tunnel testing, a simplified procedure would be desirable in the preliminary design stages, allowing early assessment of the structural resistance, evaluation of architectural or structural changes, or assessment of the need for detailed wind tunnel tests. Two kinds of wind tunnel-based procedures have been introduced in some of the existing codes and standards to treat the acrosswind and torsional response. The first is an empirical expression for the wind-induced acceleration, such as that found in the National Building Code of Canada ~NBCC! ~NRCC 1996!, while the second is an aerodynamic-load-based procedure such as those in Australian Standard ~AS 1989! and the Architectural Institute of Japan ~AIJ! Recommendations ~AIJ 1996!. The latter approach offers more flexibility as the aerodynamic load provided can be used to determine the response of any structure having generally the same architectural features and turbulence environment of the tested model, regardless of its structural characteristics. Such flexibility is made possible through the use of well-established wind-induced response analysis procedures. Meanwhile, there are some databases involving isolated, generic building shapes available in the literature ~e.g., Kareem 1988; Choi and Kanda 1993; Marukawa et al. 1992!, which can be expanded using HFBB tests. For example, a number of commercial wind tunnel facilities have accumulated data of actual buildings in their natural surroundings, which may be used to supplement the overall loading database. Though such HFBB data has been collected, it has not been assimilated and made accessible to the worldwide community, to fully realize its potential. Fortunately, the Internet now provides the opportunity to pool and archive the international stores of wind tunnel data. 
This paper takes the first step in that direction by introducing an interactive database of aerodynamic loads obtained from HFBB measurements on a host of isolated tall building models, accessible to the worldwide Internet community via Microsoft Explorer at the URL address http://www.nd.edu/;nathaz. Through the use of this interactive portal, users can select the Engineer, Malouf Engineering International, Inc., 275 W. Campbell Rd., Suite 611, Richardson, TX 75080; Fomerly, Research Associate, NatHaz Modeling Laboratory, Dept. of Civil Engineering and Geological Sciences, Univ. of Notre Dame, Notre Dame, IN 46556. E-mail: [email protected] Graduate Student, NatHaz Modeling Laboratory, Dept. of Civil Engineering and Geological Sciences, Univ. of Notre Dame, Notre Dame, IN 46556. E-mail: [email protected] Robert M. Moran Professor, Dept. of Civil Engineering and Geological Sciences, Univ. of Notre Dame, Notre Dame, IN 46556. E-mail: [email protected]. Note. Associate Editor: Bogusz Bienkiewicz. Discussion open until August 1, 2003. Separate discussions must be submitted for individual papers. To extend the closing date by one month, a written request must be filed with the ASCE Managing Editor. The manuscript for this paper was submitted for review and possible publication on April 24, 2001; approved on December 11, 2001. This paper is part of the Journal of Structural Engineering, Vol. 129, No. 3, March 1, 2003. ©ASCE, ISSN 0733-9445/2003/3-394–404/$18.00. Introduction Under the action of wind, typical tall buildings oscillate simultaneously in the alongwind, acrosswind, and torsional directions. It has been recognized that for many high-rise buildings the acrosswind and torsional response may exceed the alongwind response in terms of both serviceability and survivability designs ~e.g., Kareem 1985!. Nevertheless, most existing codes and standards provide only procedures for the alongwind response and provide little guidance for the critical acrosswind and torsional responses. This is partially attributed to the fact that the acrosswind and torsional responses, unlike the alongwind, result mainly from the aerodynamic pressure fluctuations in the separated shear layers and wake flow fields, which have prevented, to date, any acceptable direct analytical relation to the oncoming wind velocity fluctuations. Further, higher-order relationships may exist that are beyond the scope of the current discussion ~Gurley et al. 2001!. Wind tunnel measurements have thus served as an effective alternative for determining acrosswind and torsional loads. For example, the high-frequency base balance ~HFBB! and aeroelasgeometry and dimensions of a model building, from the available choices, and specify an urban or suburban condition. Upon doing so, the aerodynamic load spectra for the alongwind, acrosswind, or torsional response is displayed along with a Java interface that permits users to specify a reduced frequency of interest and automatically obtain the corresponding spectral value. When coupled with the concise analysis procedure, discussion, and example provided, the database provides a comprehensive tool for computation of the wind-induced response of tall buildings. 
Wind-Induced Response Analysis Procedure Using the aerodynamic base bending moment or base torque as the input, the wind-induced response of a building can be computed using random vibration analysis by assuming idealized structural mode shapes, e.g., linear, and considering the special relationship between the aerodynamic moments and the generalized wind loads ~e.g., Tschanz and Davenport 1983; Zhou et al. 2002!. This conventional approach yields only approximate estimates of the mode-generalized torsional moments and potential inaccuracies in the lateral loads if the sway mode shapes of the structure deviate significantly from the linear assumption. As a result, this procedure often requires the additional step of mode shape corrections to adjust the measured spectrum weighted by a linear mode shape to the true mode shape ~Vickery et al. 1985; Boggs and Peterka 1989; Zhou et al. 2002!. However, instead of utilizing conventional generalized wind loads, a base-bendingmoment-based procedure is suggested here for evaluating equivalent static wind loads and response. As discussed in Zhou et al. ~2002!, the influence of nonideal mode shapes is rather negligible for base bending moments, as opposed to other quantities like base shear or generalized wind loads. As a result, base bending moments can be used directly, presenting a computationally efficient scheme, averting the need for mode shape correction and directly accommodating nonideal mode shapes. Application of this procedure for the alongwind response has proven effective in recasting the traditional gust loading factor approach in a new format ~Zhou et al. 1999; Zhou and Kareem 2001!. The procedure can be conveniently adapted to the acrosswind and torsional response ~Boggs and Peterka 1989; Kareem and Zhou 2003!. It should be noted that the response estimation based on the aerodynamic database is not advocated for acrosswind response calculations in situations where the reduced frequency is equal to or slightly less than the Strouhal number ~Simiu and Scanlan 1996; Kijewski et al. 2001!. In such cases, the possibility of negative aerodynamic damping, a manifestation of motion-induced effects, may cause the computed results to be inaccurate ~Kareem 1982!. Assuming a stationary Gaussian process, the expected maximum base bending moment response in the alongwind or acrosswind directions or the base torque response can be expressed in the following form:",
"title": ""
}
] |
[
{
"docid": "45a098c09a3803271f218fafd4d951cd",
"text": "Recent years have seen a tremendous increase in the demand for wireless bandwidth. To support this demand by innovative and resourceful use of technology, future communication systems will have to shift towards higher carrier frequencies. Due to the tight regulatory situation, frequencies in the atmospheric attenuation window around 300 GHz appear very attractive to facilitate an indoor, short range, ultra high speed THz communication system. In this paper, we investigate the influence of diffuse scattering at such high frequencies on the characteristics of the communication channel and its implications on the non-line-of-sight propagation path. The Kirchhoff approach is verified by an experimental study of diffuse scattering from randomly rough surfaces commonly encountered in indoor environments using a fiber-coupled terahertz time-domain spectroscopy system to perform angle- and frequency-dependent measurements. Furthermore, we integrate the Kirchhoff approach into a self-developed ray tracing algorithm to model the signal coverage of a typical office scenario.",
"title": ""
},
{
"docid": "1145d2375414afbdd5f1e6e703638028",
"text": "Content addressable memories (CAMs) are very attractive for high-speed table lookups in modern network systems. This paper presents a low-power dual match line (ML) ternary CAM (TCAM) to address the power consumption issue of CAMs. The highly capacitive ML is divided into two segments to reduce the active capacitance and hence the power. We analyze possible cases of mismatches and demonstrate a significant reduction in power (up to 43%) for a small penalty in search speed (4%).",
"title": ""
},
{
"docid": "469c17aa0db2c70394f081a9a7c09be5",
"text": "The potential of blockchain technology has received attention in the area of FinTech — the combination of finance and technology. Blockchain technology was first introduced as the technology behind the Bitcoin decentralized virtual currency, but there is the expectation that its characteristics of accurate and irreversible data transfer in a decentralized P2P network could make other applications possible. Although a precise definition of blockchain technology has not yet been given, it is important to consider how to classify different blockchain systems in order to better understand their potential and limitations. The goal of this paper is to add to the discussion on blockchain technology by proposing a classification based on two dimensions external to the system: (1) existence of an authority (without an authority and under an authority) and (2) incentive to participate in the blockchain (market-based and non-market-based). The combination of these elements results in four types of blockchains. We define these dimensions and describe the characteristics of the blockchain systems belonging to each classification.",
"title": ""
},
{
"docid": "26f76aa41a64622ee8f0eaaed2aac529",
"text": "OBJECTIVE\nIn this study, we explored the impact of an occupational therapy wellness program on daily habits and routines through the perspectives of youth and their parents.\n\n\nMETHOD\nData were collected through semistructured interviews with children and their parents, the Pizzi Healthy Weight Management Assessment(©), and program activities.\n\n\nRESULTS\nThree themes emerged from the interviews: Program Impact, Lessons Learned, and Time as a Barrier to Health. The most common areas that both youth and parents wanted to change were time spent watching television and play, fun, and leisure time. Analysis of activity pie charts indicated that the youth considerably increased active time in their daily routines from Week 1 to Week 6 of the program.\n\n\nCONCLUSION\nAn occupational therapy program focused on health and wellness may help youth and their parents be more mindful of their daily activities and make health behavior changes.",
"title": ""
},
{
"docid": "d0f4021050e620770f5546171cbfccdc",
"text": "This paper presents a compact 10-bit digital-to-analog converter (DAC) for LCD source drivers. The cyclic DAC architecture is used to reduce the area of LCD column drivers when compared to the use of conventional resistor-string DACs. The current interpolation technique is proposed to perform gamma correction after D/A conversion. The gamma correction circuit is shared by four DAC channels using the interleave technique. A prototype 10-bit DAC with gamma correction function is implemented in 0.35 μm CMOS technology and its average die size per channel is 0.053 mm2, which is smaller than those of the R-DACs with gamma correction function. The settling time of the 10-bit DAC is 1 μs, and the maximum INL and DNL are 2.13 least significant bit (LSB) and 1.30 LSB, respectively.",
"title": ""
},
{
"docid": "1cacfd4da5273166debad8a6c1b72754",
"text": "This article presents a paradigm case portrait of female romantic partners of heavy pornography users. Based on a sample of 100 personal letters, this portrait focuses on their often traumatic discovery of the pornography usage and the significance they attach to this usage for (a) their relationships, (b) their own worth and desirability, and (c) the character of their partners. Finally, we provide a number of therapeutic recommendations for helping these women to think and act more effectively in their very difficult circumstances.",
"title": ""
},
{
"docid": "7148253937ac85f308762f906727d1b5",
"text": "Object detection methods like Single Shot Multibox Detector (SSD) provide highly accurate object detection that run in real-time. However, these approaches require a large number of annotated training images. Evidently, not all of these images are equally useful for training the algorithms. Moreover, obtaining annotations in terms of bounding boxes for each image is costly and tedious. In this paper, we aim to obtain a highly accurate object detector using only a fraction of the training images. We do this by adopting active learning that uses ‘human in the loop’ paradigm to select the set of images that would be useful if annotated. Towards this goal, we make the following contributions: 1. We develop a novel active learning method which poses the layered architecture used in object detection as a ‘query by committee’ paradigm to choose the set of images to be queried. 2. We introduce a framework to use the exploration/exploitation trade-off in our methods. 3. We analyze the results on standard object detection datasets which show that with only a third of the training data, we can obtain more than 95% of the localization accuracy of full supervision. Further our methods outperform classical uncertainty-based active learning algorithms like maximum entropy.",
"title": ""
},
{
"docid": "c1ffc050eaee547bd0eb070559ffc067",
"text": "This paper proposes a method for designing a sentence set for utterances taking account of prosody. This method is based on a measure of coverage which incorporates two factors: (1) the distribution of voice fundamental frequency and phoneme duration predicted by the prosody generation module of a TTS; (2) perceptual damage to naturalness due to prosody modification. A set of 500 sentences with a predicted coverage of 82.6% was designed by this method, and used to collect a speech corpus. The obtained speech corpus yielded 88% of the predicted coverage. The data size was reduced to 49% in terms of number of sentences (89% in terms of number of phonemes) compared to a general-purpose corpus designed without taking prosody into account.",
"title": ""
},
{
"docid": "3f1b7062e978da9c4f9675b926c502db",
"text": "Millimeter-wave reconfigurable antennas are predicted as a future of next generation wireless networks with the availability of wide bandwidth. A coplanar waveguide (CPW) fed T-shaped frequency reconfigurable millimeter-wave antenna for 5G networks is presented. The resonant frequency is varied to obtain the 10dB return loss bandwidth in the frequency range of 23-29GHz by incorporating two variable resistors. The radiation pattern contributes two symmetrical radiation beams at approximately ±30o along the end fire direction. The 3dB beamwidth remains conserved over the entire range of operating bandwidth. The proposed antenna targets the applications of wireless systems operating in narrow passages, corridors, mine tunnels, and person-to-person body centric applications.",
"title": ""
},
{
"docid": "c24427c9c600fa16477f22f64ed27475",
"text": "The growing problem of unsolicited bulk e-mail, also known as “spam”, has generated a need for reliable anti-spam e-mail filters. Filters of this type have so far been based mostly on manually constructed keyword patterns. An alternative approach has recently been proposed, whereby a Naive Bayesian classifier is trained automatically to detect spam messages. We test this approach on a large collection of personal e-mail messages, which we make publicly available in “encrypted” form contributing towards standard benchmarks. We introduce appropriate cost-sensitive measures, investigating at the same time the effect of attribute-set size, training-corpus size, lemmatization, and stop lists, issues that have not been explored in previous experiments. Finally, the Naive Bayesian filter is compared, in terms of performance, to a filter that uses keyword patterns, and which is part of a widely used e-mail reader.",
"title": ""
},
{
"docid": "2472a20493c3319cdc87057cc3d70278",
"text": "Traffic flow prediction is an essential function of traffic information systems. Conventional approaches, using artificial neural networks with narrow network architecture and poor training samples for supervised learning, have been only partially successful. In this paper, a deep-learning neural-network based on TensorFlow™ is suggested for the prediction traffic flow conditions, using real-time traffic data. Until now, no research has applied the TensorFlow™ deep learning neural network model to the estimation of traffic conditions. The suggested supervised model is trained by a deep learning algorithm, which uses real traffic data aggregated every five minutes. Results demonstrate that the model's accuracy rate is around 99%.",
"title": ""
},
{
"docid": "fb37da1dc9d95501e08d0a29623acdab",
"text": "This study evaluates various evolutionary search methods to direct neural controller evolution in company with policy (behavior) transfer across increasingly complex collective robotic (RoboCup keep-away) tasks. Robot behaviors are first evolved in a source task and then transferred for further evolution to more complex target tasks. Evolutionary search methods tested include objective-based search (fitness function), behavioral and genotypic diversity maintenance, and hybrids of such diversity maintenance and objective-based search. Evolved behavior quality is evaluated according to effectiveness and efficiency. Effectiveness is the average task performance of transferred and evolved behaviors, where task performance is the average time the ball is controlled by a keeper team. Efficiency is the average number of generations taken for the fittest evolved behaviors to reach a minimum task performance threshold given policy transfer. Results indicate that policy transfer coupled with hybridized evolution (behavioral diversity maintenance and objective-based search) addresses the bootstrapping problem for increasingly complex keep-away tasks. That is, this hybrid method (coupled with policy transfer) evolves behaviors that could not otherwise be evolved. Also, this hybrid evolutionary search was demonstrated as consistently evolving topologically simple neural controllers that elicited high-quality behaviors.",
"title": ""
},
{
"docid": "72c917a9f42d04cae9e03a31e0728555",
"text": "We extend Fano’s inequality, which controls the average probability of events in terms of the average of some f–divergences, to work with arbitrary events (not necessarily forming a partition) and even with arbitrary [0, 1]–valued random variables, possibly in continuously infinite number. We provide two applications of these extensions, in which the consideration of random variables is particularly handy: we offer new and elegant proofs for existing lower bounds, on Bayesian posterior concentration (minimax or distribution-dependent) rates and on the regret in non-stochastic sequential learning. MSC 2000 subject classifications. Primary-62B10; secondary-62F15, 68T05.",
"title": ""
},
{
"docid": "92e150f30ae9ef371ffdd7160c84719d",
"text": "Categorization is a vitally important skill that people use every day. Early theories of category learning assumed a single learning system, but recent evidence suggests that human category learning may depend on many of the major memory systems that have been hypothesized by memory researchers. As different memory systems flourish under different conditions, an understanding of how categorization uses available memory systems will improve our understanding of a basic human skill, lead to better insights into the cognitive changes that result from a variety of neurological disorders, and suggest improvements in training procedures for complex categorization tasks.",
"title": ""
},
{
"docid": "ca20d27b1e6bfd1f827f967473d8bbdd",
"text": "We propose a simple yet effective detector for pedestrian detection. The basic idea is to incorporate common sense and everyday knowledge into the design of simple and computationally efficient features. As pedestrians usually appear up-right in image or video data, the problem of pedestrian detection is considerably simpler than general purpose people detection. We therefore employ a statistical model of the up-right human body where the head, the upper body, and the lower body are treated as three distinct components. Our main contribution is to systematically design a pool of rectangular templates that are tailored to this shape model. As we incorporate different kinds of low-level measurements, the resulting multi-modal & multi-channel Haar-like features represent characteristic differences between parts of the human body yet are robust against variations in clothing or environmental settings. Our approach avoids exhaustive searches over all possible configurations of rectangle features and neither relies on random sampling. It thus marks a middle ground among recently published techniques and yields efficient low-dimensional yet highly discriminative features. Experimental results on the INRIA and Caltech pedestrian datasets show that our detector reaches state-of-the-art performance at low computational costs and that our features are robust against occlusions.",
"title": ""
},
{
"docid": "8eafcf061e2b9cda4cd02de9bf9a31d1",
"text": "Building upon recent Deep Neural Network architectures, current approaches lying in the intersection of Computer Vision and Natural Language Processing have achieved unprecedented breakthroughs in tasks like automatic captioning or image retrieval. Most of these learning methods, though, rely on large training sets of images associated with human annotations that specifically describe the visual content. In this paper we propose to go a step further and explore the more complex cases where textual descriptions are loosely related to the images. We focus on the particular domain of news articles in which the textual content often expresses connotative and ambiguous relations that are only suggested but not directly inferred from images. We introduce an adaptive CNN architecture that shares most of the structure for multiple tasks including source detection, article illustration and geolocation of articles. Deep Canonical Correlation Analysis is deployed for article illustration, and a new loss function based on Great Circle Distance is proposed for geolocation. Furthermore, we present BreakingNews, a novel dataset with approximately 100K news articles including images, text and captions, and enriched with heterogeneous meta-data (such as GPS coordinates and user comments). We show this dataset to be appropriate to explore all aforementioned problems, for which we provide a baseline performance using various Deep Learning architectures, and different representations of the textual and visual features. We report very promising results and bring to light several limitations of current state-of-the-art in this kind of domain, which we hope will help spur progress in the field.",
"title": ""
},
{
"docid": "f1b99496d9cdbeede7402738c50db135",
"text": "Recommender systems base their operation on past user ratings over a collection of items, for instance, books, CDs, etc. Collaborative filtering (CF) is a successful recommendation technique that confronts the ‘‘information overload’’ problem. Memory-based algorithms recommend according to the preferences of nearest neighbors, and model-based algorithms recommend by first developing a model of user ratings. In this paper, we bring to surface factors that affect CF process in order to identify existing false beliefs. In terms of accuracy, by being able to view the ‘‘big picture’’, we propose new approaches that substantially improve the performance of CF algorithms. For instance, we obtain more than 40% increase in precision in comparison to widely-used CF algorithms. In terms of efficiency, we propose a model-based approach based on latent semantic indexing (LSI), that reduces execution times at least 50% than the classic",
"title": ""
},
{
"docid": "7209d813d1a47ac8d2f8f19f4239b8b4",
"text": "We conducted two pilot studies to select the appropriate e-commerce website type and contents for the homepage stimuli. The purpose of Pilot Study 1 was to select a website category with which subjects are not familiar, for which they show neither liking nor disliking, but have some interests in browsing. Unfamiliarity with the website was required because familiarity with a certain category of website may influence perceived complexity of (Radocy and Boyle 1988) and liking for the webpage stimuli (Bornstein 1989; Zajonc 2000). We needed a website for which subjects showed neither liking nor disliking so that the manipulation of webpage stimuli in the experiment could be assumed to be the major influence on their reported emotional responses and approach tendencies. To have some degree of interest in browsing the website is necessary for subjects to engage in experiential web-browsing activities with the webpage stimuli. Based on the results of Pilot Study 1, we selected the gifts website as the context for the experimental stimuli. Then, we conducted Pilot Study 2 to identify appropriate gift items to be included in the webpage stimuli. Thirteen gift items, which were shown to elicit neutral affect in the subjects and to be of some interest to the subjects for browsing or purchase, were selected for the website.",
"title": ""
},
{
"docid": "9e6f69cb83422d756909104f2c1c8887",
"text": "We introduce a novel method for approximate alignment of point-based surfaces. Our approach is based on detecting a set of salient feature points using a scale-space representation. For each feature point we compute a signature vector that is approximately invariant under rigid transformations. We use the extracted signed feature set in order to obtain approximate alignment of two surfaces. We apply our method for the automatic alignment of multiple scans using both scan-to-scan and scan-to-model matching capabilities.",
"title": ""
},
{
"docid": "cc4458a843a2a6ffa86b4efd1956ffca",
"text": "There is a growing interest in the use of chronic deep brain stimulation (DBS) for the treatment of medically refractory movement disorders and other neurological and psychiatric conditions. Fundamental questions remain about the physiologic effects and safety of DBS. Previous basic research studies have focused on the direct polarization of neuronal membranes by electrical stimulation. The goal of this paper is to provide information on the thermal effects of DBS using finite element models to investigate the magnitude and spatial distribution of DBS induced temperature changes. The parameters investigated include: stimulation waveform, lead selection, brain tissue electrical and thermal conductivity, blood perfusion, metabolic heat generation during the stimulation. Our results show that clinical deep brain stimulation protocols will increase the temperature of surrounding tissue by up to 0.8degC depending on stimulation/tissue parameters",
"title": ""
}
] |
scidocsrr
|
401e8b2df6d66df0938f45ec1b580aba
|
A clickstream-based collaborative filtering personalization model: towards a better performance
|
[
{
"docid": "0dd78cb46f6d2ddc475fd887a0dc687c",
"text": "Predicting items a user would like on the basis of other users’ ratings for these items has become a well-established strategy adopted by many recommendation services on the Internet. Although this can be seen as a classification problem, algorithms proposed thus far do not draw on results from the machine learning literature. We propose a representation for collaborative filtering tasks that allows the application of virtually any machine learning algorithm. We identify the shortcomings of current collaborative filtering techniques and propose the use of learning algorithms paired with feature extraction techniques that specifically address the limitations of previous approaches. Our best-performing algorithm is based on the singular value decomposition of an initial matrix of user ratings, exploiting latent structure that essentially eliminates the need for users to rate common items in order to become predictors for one another's preferences. We evaluate the proposed algorithm on a large database of user ratings for motion pictures and find that our approach significantly outperforms current collaborative filtering algorithms.",
"title": ""
},
{
"docid": "ef2ed85c9a25a549aa7082b18242a120",
"text": "Markov models have been extensively used to model Web users' navigation behaviors on Web sites. The link structure of a Web site can be seen as a citation network. By applying bibliographic co-citation and coupling analysis to a Markov model constructed from a Web log file on a Web site, we propose a clustering algorithm called CitationCluster to cluster conceptually related pages. The clustering results are used to construct a conceptual hierarchy of the Web site. Markov model based link prediction is integrated with the hierarchy to assist users' navigation on the Web site.",
"title": ""
}
] |
[
{
"docid": "d36a69538293e384d64c905c678f4944",
"text": "Many studies have investigated factors that affect susceptibility to false memories. However, few have investigated the role of sleep deprivation in the formation of false memories, despite overwhelming evidence that sleep deprivation impairs cognitive function. We examined the relationship between self-reported sleep duration and false memories and the effect of 24 hr of total sleep deprivation on susceptibility to false memories. We found that under certain conditions, sleep deprivation can increase the risk of developing false memories. Specifically, sleep deprivation increased false memories in a misinformation task when participants were sleep deprived during event encoding, but did not have a significant effect when the deprivation occurred after event encoding. These experiments are the first to investigate the effect of sleep deprivation on susceptibility to false memories, which can have dire consequences.",
"title": ""
},
{
"docid": "86ededf9b452bbc51117f5a117247b51",
"text": "An approach to high field control, particularly in the areas near the high voltage (HV) and ground terminals of an outdoor insulator, is proposed using a nonlinear grading material; Zinc Oxide (ZnO) microvaristors compounded with other polymeric materials to obtain the required properties and allow easy application. The electrical properties of the microvaristor compounds are characterised by a nonlinear field-dependent conductivity. This paper describes the principles of the proposed field-control solution and demonstrates the effectiveness of the proposed approach in controlling the electric field along insulator profiles. A case study is carried out for a typical 11 kV polymeric insulator design to highlight the merits of the grading approach. Analysis of electric potential and field distributions on the insulator surface is described under dry clean and uniformly contaminated surface conditions for both standard and microvaristor-graded insulators. The grading and optimisation principles to allow better performance are investigated to improve the performance of the insulator both under steady state operation and under surge conditions. Furthermore, the dissipated power and associated heat are derived to examine surface heating and losses in the grading regions and for the complete insulator. Preliminary tests on inhouse prototype insulators have confirmed better flashover performance of the proposed graded insulator with a 21 % increase in flashover voltage.",
"title": ""
},
{
"docid": "67e85e8b59ec7dc8b0019afa8270e861",
"text": "Machine learning’s ability to rapidly evolve to changing and complex situations has helped it become a fundamental tool for computer security. That adaptability is also a vulnerability: attackers can exploit machine learning systems. We present a taxonomy identifying and analyzing attacks against machine learning systems. We show how these classes influence the costs for the attacker and defender, and we give a formal structure defining their interaction. We use our framework to survey and analyze the literature of attacks against machine learning systems. We also illustrate our taxonomy by showing how it can guide attacks against SpamBayes, a popular statistical spam filter. Finally, we discuss how our taxonomy suggests new lines of defenses.",
"title": ""
},
{
"docid": "74dd6f8fbc0469757d00e95b0aeeab65",
"text": "To date, no short scale exists with strong psychometric properties that can assess problematic pornography consumption based on an overarching theoretical background. The goal of the present study was to develop a brief scale, the Problematic Pornography Consumption Scale (PPCS), based on Griffiths's (2005) six-component addiction model that can distinguish between nonproblematic and problematic pornography use. The PPCS was developed using an online sample of 772 respondents (390 females, 382 males; Mage = 22.56, SD = 4.98 years). Creation of items was based on previous problematic pornography use instruments and on the definitions of factors in Griffiths's model. A confirmatory factor analysis (CFA) was carried out-because the scale is based on a well-established theoretical model-leading to an 18-item second-order factor structure. The reliability of the PPCS was excellent, and measurement invariance was established. In the current sample, 3.6% of the users belonged to the at-risk group. Based on sensitivity and specificity analyses, we identified an optimal cutoff to distinguish between problematic and nonproblematic pornography users. The PPCS is a multidimensional scale of problematic pornography use with a strong theoretical basis that also has strong psychometric properties in terms of factor structure and reliability.",
"title": ""
},
{
"docid": "fbc47f2d625755bda6d9aa37805b69f1",
"text": "In many surveillance applications it is desirable to determine if a given individual has been previously observed over a network of cameras. This is the person reidentification problem. This paper focuses on reidentification algorithms that use the overall appearance of an individual as opposed to passive biometrics such as face and gait. Person reidentification approaches have two aspects: (i) establish correspondence between parts, and (ii) generate signatures that are invariant to variations in illumination, pose, and the dynamic appearance of clothing. A novel spatiotemporal segmentation algorithm is employed to generate salient edgels that are robust to changes in appearance of clothing. The invariant signatures are generated by combining normalized color and salient edgel histograms. Two approaches are proposed to generate correspondences: (i) a model based approach that fits an articulated model to each individual to establish a correspondence map, and (ii) an interest point operator approach that nominates a large number of potential correspondences which are evaluated using a region growing scheme. Finally, the approaches are evaluated on a 44 person database across 3 disparate views.",
"title": ""
},
{
"docid": "e38f29a603fb23544ea2fcae04eb1b5d",
"text": "Provenance refers to the entire amount of information, comprising all the elements and their relationships, that contribute to the existence of a piece of data. The knowledge of provenance data allows a great number of benefits such as verifying a product, result reproductivity, sharing and reuse of knowledge, or assessing data quality and validity. With such tangible benefits, it is no wonder that in recent years, research on provenance has grown exponentially, and has been applied to a wide range of different scientific disciplines. Some years ago, managing and recording provenance information were performed manually. Given the huge volume of information available nowadays, the manual performance of such tasks is no longer an option. The problem of systematically performing tasks such as the understanding, capture and management of provenance has gained significant attention by the research community and industry over the past decades. As a consequence, there has been a huge amount of contributions and proposed provenance systems as solutions for performing such kinds of tasks. The overall objective of this paper is to plot the landscape of published systems in the field of provenance, with two main purposes. First, we seek to evaluate the desired characteristics that provenance systems are expected to have. Second, we aim at identifying a set of representative systems (both early and recent use) to be exhaustively analyzed according to such characteristics. In particular, we have performed a systematic literature review of studies, identifying a comprehensive set of 105 relevant resources in all. The results show that there are common aspects or characteristics of provenance systems thoroughly renowned throughout the literature on the topic. Based on these results, we have defined a six-dimensional taxonomy of provenance characteristics attending to: general aspects, data capture, data access, subject, storage, and non-functional aspects. Additionally, the study has found that there are 25 most referenced provenance systems within the provenance context. This study exhaustively analyzes and compares such systems attending to our taxonomy and pinpoints future directions.",
"title": ""
},
{
"docid": "ad6d21a36cc5500e4d8449525eae25ca",
"text": "Human Activity Recognition is one of the attractive topics to develop smart interactive environment in which computing systems can understand human activities in natural context. Besides traditional approaches with visual data, inertial sensors in wearable devices provide a promising approach for human activity recognition. In this paper, we propose novel methods to recognize human activities from raw data captured from inertial sensors using convolutional neural networks with either 2D or 3D filters. We also take advantage of hand-crafted features to combine with learned features from Convolution-Pooling blocks to further improve accuracy for activity recognition. Experiments on UCI Human Activity Recognition dataset with six different activities demonstrate that our method can achieve 96.95%, higher than existing methods.",
"title": ""
},
{
"docid": "793a1a5ff7b7d2c7fa65ce1eaa65b0c0",
"text": "In this paper we describe our implementation of algorithms for face detection and recognition in color images under Matlab. For face detection, we trained a feedforward neural network to perform skin segmentation, followed by the eyes detection, face alignment, lips detection and face delimitation. The eyes were detected by analyzing the chrominance and the angle between neighboring pixels and, then, the results were used to perform face alignment. The lips were detected based on the analysis of the Red color component intensity in the lower face region. Finally, the faces were delimited using the eyes and lips positions. The face recognition involved a classifier that used the standard deviation of the difference between color matrices of the faces to identify the input face. The algorithms were run on Faces 1999 dataset. The proposed method achieved 96.9%, 89% and 94% correct detection rate of face, eyes and lips, respectively. The correctness rate of the face recognition algorithm was 70.7%.",
"title": ""
},
{
"docid": "02bae85905793e75950acbe2adcc6a7b",
"text": "The olfactory system is an essential part of human physiology, with a rich evolutionary history. Although humans are less dependent on chemosensory input than are other mammals (Niimura 2009, Hum. Genomics 4:107-118), olfactory function still plays a critical role in health and behavior. The detection of hazards in the environment, generating feelings of pleasure, promoting adequate nutrition, influencing sexuality, and maintenance of mood are described roles of the olfactory system, while other novel functions are being elucidated. A growing body of evidence has implicated a role for olfaction in such diverse physiologic processes as kin recognition and mating (Jacob et al. 2002a, Nat. Genet. 30:175-179; Horth 2007, Genomics 90:159-175; Havlicek and Roberts 2009, Psychoneuroendocrinology 34:497-512), pheromone detection (Jacob et al. 200b, Horm. Behav. 42:274-283; Wyart et al. 2007, J. Neurosci. 27:1261-1265), mother-infant bonding (Doucet et al. 2009, PLoS One 4:e7579), food preferences (Mennella et al. 2001, Pediatrics 107:E88), central nervous system physiology (Welge-Lüssen 2009, B-ENT 5:129-132), and even longevity (Murphy 2009, JAMA 288:2307-2312). The olfactory system, although phylogenetically ancient, has historically received less attention than other special senses, perhaps due to challenges related to its study in humans. In this article, we review the anatomic pathways of olfaction, from peripheral nasal airflow leading to odorant detection, to epithelial recognition of these odorants and related signal transduction, and finally to central processing. Olfactory dysfunction, which can be defined as conductive, sensorineural, or central (typically related to neurodegenerative disorders), is a clinically significant problem, with a high burden on quality of life that is likely to grow in prevalence due to demographic shifts and increased environmental exposures.",
"title": ""
},
{
"docid": "f5a7a4729f9374ee7bee4401475647f9",
"text": "In the last decade, deep learning has contributed to advances in a wide range computer vision tasks including texture analysis. This paper explores a new approach for texture segmentation using deep convolutional neural networks, sharing important ideas with classic filter bank based texture segmentation methods. Several methods are developed to train Fully Convolutional Networks to segment textures in various applications. We show in particular that these networks can learn to recognize and segment a type of texture, e.g. wood and grass from texture recognition datasets (no training segmentation). We demonstrate that Fully Convolutional Networks can learn from repetitive patterns to segment a particular texture from a single image or even a part of an image. We take advantage of these findings to develop a method that is evaluated on a series of supervised and unsupervised experiments and improve the state of the art on the Prague texture segmentation datasets.",
"title": ""
},
{
"docid": "c1305b1ccc199126a52c6a2b038e24d1",
"text": "This study has devoted much effort to developing an integrated model designed to predict and explain an individual’s continued use of online services based on the concepts of the expectation disconfirmation model and the theory of planned behavior. Empirical data was collected from a field survey of Cyber University System (CUS) users to verify the fit of the hypothetical model. The measurement model indicates the theoretical constructs have adequate reliability and validity while the structured equation model is illustrated as having a high model fit for empirical data. Study’s findings show that a customer’s behavioral intention towards e-service continuance is mainly determined by customer satisfaction and additionally affected by perceived usefulness and subjective norm. Generally speaking, the integrated model can fully reflect the spirit of the expectation disconfirmation model and take advantage of planned behavior theory. After consideration of the impact of systemic features, personal characteristics, and social influence on customer behavior, the integrated model had a better explanatory advantage than other EDM-based models proposed in prior research. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b6b9e1eaf17f6cdbc9c060e467021811",
"text": "Tumour-associated viruses produce antigens that, on the face of it, are ideal targets for immunotherapy. Unfortunately, these viruses are experts at avoiding or subverting the host immune response. Cervical-cancer-associated human papillomavirus (HPV) has a battery of immune-evasion mechanisms at its disposal that could confound attempts at HPV-directed immunotherapy. Other virally associated human cancers might prove similarly refractive to immuno-intervention unless we learn how to circumvent their strategies for immune evasion.",
"title": ""
},
{
"docid": "b46885c79ece056211faeaa23cbb5c20",
"text": "We have been developing the Network Incident analysis Center for Tactical Emergency Response (nicter), whose objective is to detect and identify propagating malwares. The nicter mainly monitors darknet, a set of unused IP addresses, to observe global trends of network threats, while it captures and analyzes malware executables. By correlating the network threats with analysis results of malware, the nicter identifies the root causes (malwares) of the detected network threats. Through a long-term operation of the nicter for more than five years, we have achieved some key findings that would help us to understand the intentions of attackers and the comprehensive threat landscape of the Internet. With a focus on a well-knwon malware, i. e., W32.Downadup, this paper provides some practical case studies with considerations and consequently we could obtain a threat landscape that more than 60% of attacking hosts observed in our dark-net could be infected by W32.Downadup. As an evaluation, we confirmed that the result of the correlation analysis was correct in a rate of 86.18%.",
"title": ""
},
{
"docid": "88130a65e625f85e527d63a0d2a446d4",
"text": "Test-Driven Development (TDD) is an agile practice that is widely accepted and advocated by most agile methods and methodologists. In this paper, we report on a longitudinal case study of an IBM team who has sustained use of TDD for five years and over ten releases of a Java-implemented product. The team worked from a design and wrote tests incrementally before or while they wrote code and, in the process, developed a significant asset of automated tests. The IBM team realized sustained quality improvement relative to a pre-TDD project and consistently had defect density below industry standards. As a result, our data indicate that the TDD practice can aid in the production of high quality products. This quality improvement would compensate for the moderate perceived productivity losses. Additionally, the use of TDD may decrease the degree to which code complexity increases as software ages.",
"title": ""
},
{
"docid": "11a9d7a218d1293878522252e1f62778",
"text": "This paper presents a wideband circularly polarized millimeter-wave (mmw) antenna design. We introduce a novel 3-D-printed polarizer, which consists of several air and dielectric slabs to transform the polarization of the antenna radiation from linear to circular. The proposed polarizer is placed above a radiating aperture operating at the center frequency of 60 GHz. An electric field, <inline-formula> <tex-math notation=\"LaTeX\">${E}$ </tex-math></inline-formula>, radiated from the aperture generates two components of electric fields, <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {x}}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {y}}$ </tex-math></inline-formula>. After passing through the polarizer, both <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {x}}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {y}}$ </tex-math></inline-formula> fields can be degenerated with an orthogonal phase difference which results in having a wide axial ratio bandwidth. The phase difference between <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {x}}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {y}}$ </tex-math></inline-formula> is determined by the incident angle <inline-formula> <tex-math notation=\"LaTeX\">$\\phi $ </tex-math></inline-formula>, of the polarization of the electric field to the polarizer as well as the thickness, <inline-formula> <tex-math notation=\"LaTeX\">${h}$ </tex-math></inline-formula>, of the dielectric slabs. With the help of the thickness of the polarizer, the directivity of the radiation pattern is increased so as to devote high-gain and wideband characteristics to the antenna. To verify our concept, an intensive parametric study and an experiment were carried out. Three antenna sources, including dipole, patch, and aperture antennas, were investigated with the proposed 3-D-printed polarizer. All measured results agree with the theoretical analysis. The proposed antenna with the polarizer achieves a wide impedance bandwidth of 50% from 45 to 75 GHz for the reflection coefficient less than or equal −10 dB, and yields an overlapped axial ratio bandwidth of 30% from 49 to 67 GHz for the axial ratio ≤ 3 dB. The maximum gain of the antenna reaches to 15 dBic. The proposed methodology of this design can apply to applications related to mmw wireless communication systems. The ultimate goal of this paper is to develop a wideband, high-gain, and low-cost antenna for the mmw frequency band.",
"title": ""
},
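The abstract above attributes the wide axial-ratio bandwidth to the near-orthogonal phase difference imposed between the Ex and Ey components. As a rough numerical illustration (my own sketch, not the authors' code; the amplitude and phase values are invented), one can trace the polarization ellipse and compute the axial ratio directly:

```python
import numpy as np

def axial_ratio_db(ex_amp, ey_amp, phase_diff_rad, n=10000):
    """Trace E(t) = [Ex cos(wt), Ey cos(wt + delta)] over one period
    and return the axial ratio (major/minor axis) in dB."""
    wt = np.linspace(0.0, 2.0 * np.pi, n)
    ex = ex_amp * np.cos(wt)
    ey = ey_amp * np.cos(wt + phase_diff_rad)
    r = np.hypot(ex, ey)                       # instantaneous field magnitude
    return 20.0 * np.log10(r.max() / r.min())

# Equal amplitudes with a 90 deg phase difference -> circular polarization (AR ~ 0 dB).
print(axial_ratio_db(1.0, 1.0, np.pi / 2))
# A 75 deg phase difference degrades the axial ratio but stays within the 3 dB criterion.
print(axial_ratio_db(1.0, 1.0, np.deg2rad(75)))
```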
{
"docid": "39b7ab83a6a0d75b1ec28c5ff485b98d",
"text": "Video object segmentation is a fundamental step in many advanced vision applications. Most existing algorithms are based on handcrafted features such as HOG, super-pixel segmentation or texturebased techniques, while recently deep features have been found to be more efficient. Existing algorithms observe performance degradation in the presence of challenges such as illumination variations, shadows, and color camouflage. To handle these challenges we propose a fusion based moving object segmentation algorithm which exploits color as well as depth information using GAN to achieve more accuracy. Our goal is to segment moving objects in the presence of challenging background scenes, in real environments. To address this problem, GAN is trained in an unsupervised manner on color and depth information independently with challenging video sequences. During testing, the trained GAN generates backgrounds similar to that in the test sample. The generated background samples are then compared with the test sample to segment moving objects. The final result is computed by fusion of object boundaries in both modalities, RGB and the depth. The comparison of our proposed algorithm with five state-of-the-art methods on publicly available dataset has shown the strength of our algorithm for moving object segmentation in videos in the presence of challenging real scenarios.",
"title": ""
},
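The abstract above describes comparing GAN-generated backgrounds with the test frame in both RGB and depth and then fusing the two results. The snippet below sketches only that comparison-and-fusion step, assuming the generated backgrounds are already available as arrays; the thresholds and the simple logical-OR fusion are placeholders for the boundary-level fusion used in the paper.

```python
import numpy as np

def segment(frame, generated_bg, threshold):
    """Foreground mask: pixels that differ noticeably from the
    GAN-generated background estimate."""
    diff = np.abs(frame.astype(np.float32) - generated_bg.astype(np.float32))
    if diff.ndim == 3:                  # collapse colour channels
        diff = diff.mean(axis=2)
    return diff > threshold

def fuse_rgbd(frame_rgb, bg_rgb, frame_d, bg_d, t_rgb=25.0, t_d=40.0):
    """Fuse the colour and depth foreground masks; a logical OR stands in
    for the boundary-level fusion described in the paper."""
    mask_rgb = segment(frame_rgb, bg_rgb, t_rgb)
    mask_d = segment(frame_d, bg_d, t_d)
    return mask_rgb | mask_d

# Toy example with random data standing in for a video frame and the
# backgrounds produced by the two GANs (RGB and depth).
rgb = np.random.randint(0, 256, (120, 160, 3))
depth = np.random.randint(0, 256, (120, 160))
print(fuse_rgbd(rgb, rgb, depth, depth).any())   # identical inputs -> no foreground
```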
{
"docid": "0b6ce2e4f3ef7f747f38068adef3da54",
"text": "Network throughput can be increased by allowing multipath, adaptive routing. Adaptive routing allows more freedom in the paths taken by messages, spreading load over physical channels more evenly. The flexibility of adaptive routing introduces new possibilities of deadlock. Previous deadlock avoidance schemes in k-ary n-cubes require an exponential number of virtual channels, independent of network size and dimension. Planar adaptive routing algorithms reduce the complexity of deadlock prevention by reducing the number of choices at each routing step. In the fault-free case, planar-adaptive networks are guaranteed to be deadlock-free. In the presence of network faults, the planar-adaptive router can be extended with misrouting to produce a working network which remains provably deadlock free and is provably livelock free. In addition, planar adaptive networks can simultaneously support both in-order and adaptive, out-of-order packet delivery.\nPlanar-adaptive routing is of practical significance. It provides the simplest known support for deadlock-free adaptive routing in k-ary n-cubes of more than two dimensions (with k > 2). Restricting adaptivity reduces the hardware complexity, improving router speed or allowing additional performance-enhancing network features. The structure of planar-adaptive routers is amenable to efficient implementation.",
"title": ""
},
{
"docid": "7d7d8d521cc098a7672cbe2e387dde58",
"text": "AIM\nThe purpose of this review is to represent acids that can be used as surface etchant before adhesive luting of ceramic restorations, placement of orthodontic brackets or repair of chipped porcelain restorations. Chemical reactions, application protocol, and etching effect are presented as well.\n\n\nSTUDY SELECTION\nAvailable scientific articles published in PubMed and Scopus literature databases, scientific reports and manufacturers' instructions and product information from internet websites, written in English, using following search terms: \"acid etching, ceramic surface treatment, hydrofluoric acid, acidulated phosphate fluoride, ammonium hydrogen bifluoride\", have been reviewed.\n\n\nRESULTS\nThere are several acids with fluoride ion in their composition that can be used as ceramic surface etchants. The etching effect depends on the acid type and its concentration, etching time, as well as ceramic type. The most effective etching pattern is achieved when using hydrofluoric acid; the numerous micropores and channels of different sizes, honeycomb-like appearance, extruded crystals or scattered irregular ceramic particles, depending on the ceramic type, have been detected on the etched surfaces.\n\n\nCONCLUSION\nAcid etching of the bonding surface of glass - ceramic restorations is considered as the most effective treatment method that provides a reliable bond with composite cement. Selective removing of the glassy matrix of silicate ceramics results in a micromorphological three-dimensional porous surface that allows micromechanical interlocking of the luting composite.",
"title": ""
},
{
"docid": "efe8e9759d3132e2a012098d41a05580",
"text": "A formalism is presented for computing and organizing actions for autonomous agents in dynamic environments. We introduce the notion of teleo-reactive (T-R) programs whose execution entails the construction of circuitry for the continuous computation of the parameters and conditions on which agent action is based. In addition to continuous feedback, T-R programs support parameter binding and recursion. A primary di erence between T-R programs and many other circuit-based systems is that the circuitry of T-R programs is more compact; it is constructed at run time and thus does not have to anticipate all the contingencies that might arise over all possible runs. In addition, T-R programs are intuitive and easy to write and are written in a form that is compatible with automatic planning and learning methods. We brie y describe some experimental applications of T-R programs in the control of simulated and actual mobile robots.",
"title": ""
},
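A teleo-reactive program is, in essence, an ordered list of condition–action rules that is re-evaluated continuously, always firing the action of the first rule whose condition holds. The interpreter below is an illustrative sketch of that control loop (not Nilsson's implementation), with a toy one-dimensional "move to the goal" program.

```python
# Minimal teleo-reactive (T-R) interpreter: continuously scan the rule
# list and fire the action of the first rule whose condition is true.
def run_tr(program, state, max_steps=50):
    for _ in range(max_steps):
        for condition, action in program:      # rules are ordered by priority
            if condition(state):
                if action is None:             # top rule satisfied -> goal reached
                    return state
                action(state)
                break                          # re-evaluate from the top
    return state

# Toy world: move an agent along a line towards position 10.
def at_goal(s):     return s["pos"] == 10
def step_right(s):  s["pos"] += 1
def always(s):      return True

program = [
    (at_goal, None),          # K1 -> a1: goal condition, do nothing
    (always, step_right),     # Ki -> ai: otherwise keep moving
]

print(run_tr(program, {"pos": 3}))   # -> {'pos': 10}
```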
{
"docid": "1be58e70089b58ca3883425d1a46b031",
"text": "In this work, we propose a novel way to consider the clustering and the reduction of the dimension simultaneously. Indeed, our approach takes advantage of the mutual reinforcement between data reduction and clustering tasks. The use of a low-dimensional representation can be of help in providing simpler and more interpretable solutions. We show that by doing so, our model is able to better approximate the relaxed continuous dimension reduction solution by the true discrete clustering solution. Experiment results show that our method gives better results in terms of clustering than the state-of-the-art algorithms devoted to similar tasks for data sets with different proprieties.",
"title": ""
}
] |
scidocsrr
|
5a812bb56de310cac6c24d113eaa568c
|
Secure control against replay attacks
|
[
{
"docid": "42d5712d781140edbc6a35703d786e15",
"text": "This paper considers control and estimation problems where the sensor signals and the actuator signals are transmitted to various subsystems over a network. In contrast to traditional control and estimation problems, here the observation and control packets may be lost or delayed. The unreliability of the underlying communication network is modeled stochastically by assigning probabilities to the successful transmission of packets. This requires a novel theory which generalizes classical control/estimation paradigms. The paper offers the foundations of such a novel theory. The central contribution is to characterize the impact of the network reliability on the performance of the feedback loop. Specifically, it is shown that for network protocols where successful transmissions of packets is acknowledged at the receiver (e.g., TCP-like protocols), there exists a critical threshold of network reliability (i.e., critical probabilities for the successful delivery of packets), below which the optimal controller fails to stabilize the system. Further, for these protocols, the separation principle holds and the optimal LQG controller is a linear function of the estimated state. In stark contrast, it is shown that when there is no acknowledgement of successful delivery of control packets (e.g., UDP-like protocols), the LQG optimal controller is in general nonlinear. Consequently, the separation principle does not hold in this circumstance",
"title": ""
},
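The abstract above states that below a critical packet-delivery probability the optimal controller can no longer stabilize the plant. The simulation below illustrates the same qualitative phenomenon on a deliberately simplified scalar plant with UDP-like (unacknowledged) control packets; the threshold quoted in the comments is the standard scalar condition, not the paper's general matrix result.

```python
import numpy as np

def sim_lossy_control(a=2.0, p=0.8, steps=200, seed=0):
    """Unstable scalar plant x_{k+1} = a*x_k + u_k + w_k, with the dead-beat
    control u_k = -a*x_k applied only when the control packet arrives
    (probability p) and no retransmission on loss."""
    rng = np.random.default_rng(seed)
    x = 1.0
    for _ in range(steps):
        u = -a * x if rng.random() < p else 0.0   # lost packet -> no actuation
        x = a * x + u + 0.01 * rng.standard_normal()
    return abs(x)

# For this scalar example, mean-square stability needs (1 - p) * a**2 < 1,
# i.e. p > 1 - 1/a**2 = 0.75 when a = 2 (an illustration of the paper's
# critical-probability phenomenon, not its general result).
print(sim_lossy_control(p=0.9))   # comfortably above threshold: state stays small
print(sim_lossy_control(p=0.5))   # below threshold: state typically diverges
```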
{
"docid": "223a7496c24dcf121408ac3bba3ad4e5",
"text": "Process control and SCADA systems, with their reliance on proprietary networks and hardware, have long been considered immune to the network attacks that have wreaked so much havoc on corporate information systems. Unfortunately, new research indicates this complacency is misplaced – the move to open standards such as Ethernet, TCP/IP and web technologies is letting hackers take advantage of the control industry’s ignorance. This paper summarizes the incident information collected in the BCIT Industrial Security Incident Database (ISID), describes a number of events that directly impacted process control systems and identifies the lessons that can be learned from these security events.",
"title": ""
},
{
"docid": "f2db57e59a2e7a91a0dff36487be3aa4",
"text": "In this paper we attempt to answer two questions: (1) Why should we be interested in the security of control systems? And (2) What are the new and fundamentally different requirements and problems for the security of control systems? We also propose a new mathematical framework to analyze attacks against control systems. Within this framework we formulate specific research problems to (1) detect attacks, and (2) survive attacks.",
"title": ""
}
] |
[
{
"docid": "ffea50948eab00d47f603d24bcfc1bfd",
"text": "A statistical pattern-recognition technique was applied to the classification of musical instrument tones within a taxonomic hierarchy. Perceptually salient acoustic features— related to the physical properties of source excitation and resonance structure—were measured from the output of an auditory model (the log-lag correlogram) for 1023 isolated tones over the full pitch ranges of 15 orchestral instruments. The data set included examples from the string (bowed and plucked), woodwind (single, double, and air reed), and brass families. Using 70%/30% splits between training and test data, maximum a posteriori classifiers were constructed based on Gaussian models arrived at through Fisher multiplediscriminant analysis. The classifiers distinguished transient from continuant tones with approximately 99% correct performance. Instrument families were identified with approximately 90% performance, and individual instruments were identified with an overall success rate of approximately 70%. These preliminary analyses compare favorably with human performance on the same task and demonstrate the utility of the hierarchical approach to classification.",
"title": ""
},
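The abstract above describes a two-level scheme: Fisher discriminant analysis with Gaussian MAP classifiers, applied first to the instrument family and then to the instrument within the family. The sketch below reproduces only that hierarchical structure on synthetic features (scikit-learn's LinearDiscriminantAnalysis is itself a Gaussian MAP classifier on the discriminant projection); the real system uses correlogram-derived acoustic features, which are not modeled here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

rng = np.random.default_rng(0)

# Synthetic stand-ins for correlogram-derived features: 300 tones,
# 12 features, labelled with (family, instrument) pairs.
X = rng.standard_normal((300, 12))
family = rng.integers(0, 3, 300)             # 0=strings, 1=woodwind, 2=brass
instrument = family * 10 + rng.integers(0, 4, 300)
X += family[:, None] * 1.5                   # make the families separable

# Level 1: classify the instrument family.
family_clf = LDA().fit(X, family)

# Level 2: one instrument classifier per family.
inst_clf = {f: LDA().fit(X[family == f], instrument[family == f])
            for f in np.unique(family)}

def classify(x):
    f = int(family_clf.predict(x[None])[0])
    return f, int(inst_clf[f].predict(x[None])[0])

print(classify(X[0]), (family[0], instrument[0]))
```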
{
"docid": "d038c7b29701654f8ee908aad395fe8c",
"text": "Vaginal fibroepithelial polyp is a rare lesion, and although benign, it can be confused with malignant connective tissue lesions. Treatment is simple excision, and recurrence is extremely uncommon. We report a case of a newborn with vaginal fibroepithelial polyp. The authors suggest that vaginal polyp must be considered in the evaluation of interlabial masses in prepubertal girls.",
"title": ""
},
{
"docid": "caab24c7af0c58965833d56382132d66",
"text": "Mesh slicing is one of the most common operations in additive manufacturing (AM). However, the computing burden for such an application is usually very heavy, especially when dealing with large models. Nowadays the graphics processing units (GPU) have abundant resources and it is reasonable to utilize the computing power of GPU for mesh slicing. In the paper, we propose a parallel implementation of the slicing algorithm using GPU. We test the GPU-accelerated slicer on several models and obtain a speedup factor of about 30 when dealing with large models, compared with the CPU implementation. Results show the power of GPU on the mesh slicing problem. In the future, we will extend our work and standardize the slicing process.",
"title": ""
},
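The core operation being parallelized in the abstract above is the intersection of each mesh triangle with each slicing plane. The reference implementation below shows that kernel on the CPU (each layer/triangle pair is independent, which is what makes the problem GPU-friendly); it is a simplified sketch and ignores degenerate cases such as vertices lying exactly on a plane.

```python
import numpy as np

def slice_triangle(tri, z):
    """Return the segment (two points) where a triangle crosses the
    plane Z = z, or None if it does not cross it."""
    pts = []
    for a, b in ((0, 1), (1, 2), (2, 0)):
        za, zb = tri[a][2], tri[b][2]
        if (za - z) * (zb - z) < 0:                 # edge straddles the plane
            t = (z - za) / (zb - za)
            pts.append(tri[a] + t * (tri[b] - tri[a]))
    return (pts[0], pts[1]) if len(pts) == 2 else None

def slice_mesh(triangles, layer_height=0.2):
    zs = np.arange(min(t[:, 2].min() for t in triangles) + 1e-6,
                   max(t[:, 2].max() for t in triangles), layer_height)
    # Each (layer, triangle) pair is independent -> ideal for GPU threads.
    return {z: [s for t in triangles if (s := slice_triangle(t, z))] for z in zs}

tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
print(len(slice_mesh([tri])))   # number of layers that cut this triangle
```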
{
"docid": "e3b1e52066d20e7c92e936cdb72cc32b",
"text": "This paper presents a new approach to power system automation, based on distributed intelligence rather than traditional centralized control. The paper investigates the interplay between two international standards, IEC 61850 and IEC 61499, and proposes a way of combining of the application functions of IEC 61850-compliant devices with IEC 61499-compliant “glue logic,” using the communication services of IEC 61850-7-2. The resulting ability to customize control and automation logic will greatly enhance the flexibility and adaptability of automation systems, speeding progress toward the realization of the smart grid concept.",
"title": ""
},
{
"docid": "94076bd2a4587df2bee9d09e81af2109",
"text": "Public genealogical databases are becoming increasingly populated with historical data and records of the current population's ancestors. As this increasing amount of available information is used to link individuals to their ancestors, the resulting trees become deeper and more dense, which justifies the need for using organized, space-efficient layouts to display the data. Existing layouts are often only able to show a small subset of the data at a time. As a result, it is easy to become lost when navigating through the data or to lose sight of the overall tree structure. On the contrary, leaving space for unknown ancestors allows one to better understand the tree's structure, but leaving this space becomes expensive and allows fewer generations to be displayed at a time. In this work, we propose that the H-tree based layout be used in genealogical software to display ancestral trees. We will show that this layout presents an increase in the number of displayable generations, provides a nicely arranged, symmetrical, intuitive and organized fractal structure, increases the user's ability to understand and navigate through the data, and accounts for the visualization requirements necessary for displaying such trees. Finally, user-study results indicate potential for user acceptance of the new layout.",
"title": ""
},
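The abstract above argues for H-tree layouts of ancestral (binary) trees. The sketch below computes such a layout recursively: each generation alternates between horizontal and vertical placement, and one simple choice — made up here for illustration — is to halve the arm length at every generation. Ancestors are numbered in Ahnentafel style.

```python
def h_tree_layout(depth, x=0.0, y=0.0, arm=1.0, horizontal=True, person=1):
    """Assign (x, y) positions to ancestors numbered in Ahnentafel style
    (person 1 = root, 2n = father, 2n + 1 = mother) on an H-tree.
    Each generation alternates direction and halves the arm length."""
    positions = {person: (x, y)}
    if depth == 0:
        return positions
    dx, dy = (arm, 0.0) if horizontal else (0.0, arm)
    for child, sign in ((2 * person, -1), (2 * person + 1, +1)):
        positions.update(h_tree_layout(depth - 1,
                                       x + sign * dx, y + sign * dy,
                                       arm / 2.0, not horizontal, child))
    return positions

layout = h_tree_layout(depth=4)
print(len(layout))                     # 31 ancestors placed across 5 generations
print(layout[1], layout[2], layout[3]) # root and its two parents
```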
{
"docid": "3e2c79715d8ae80e952d1aabf03db540",
"text": "Professor Yrjo Paatero, in 1961, first introduced the Orthopantomography (OPG) [1]. It has been extensively used in dentistry for analysing the number and type of teeth present, caries, impacted teeth, root resorption, ankylosis, shape of the condyles [2], temporomandibular joints, sinuses, fractures, cysts, tumours and alveolar bone level [3,4]. Panoramic radiography is advised to all patients seeking orthodontic treatment; including Class I malocclusions [5].",
"title": ""
},
{
"docid": "b53f2f922661bfb14bf2181236fad566",
"text": "In many real world applications of machine learning, the distribution of the training data (on which the machine learning model is trained) is different from the distribution of the test data (where the learnt model is actually deployed). This is known as the problem of Domain Adaptation. We propose a novel deep learning model for domain adaptation which attempts to learn a predictively useful representation of the data by taking into account information from the distribution shift between the training and test data. Our key proposal is to successively learn multiple intermediate representations along an “interpolating path” between the train and test domains. Our experiments on a standard object recognition dataset show a significant performance improvement over the state-of-the-art. 1. Problem Motivation and Context Oftentimes in machine learning applications, we have to learn a model to accomplish a specific task using training data drawn from one distribution (the source domain), and deploy the learnt model on test data drawn from a different distribution (the target domain). For instance, consider the task of creating a mobile phone application for “image search for products”; where the goal is to look up product specifications and comparative shopping options from the internet, given a picture of the product taken with a user’s mobile phone. In this case, the underlying object recognizer will typically be trained on a labeled corpus of images (perhaps scraped from the internet), and tested on the images taken using the user’s phone camera. The challenge here is that the distribution of training and test images is not the same. A naively Appeared in the proceedings of the ICML 2013, Workshop on Representation Learning, Atlanta, Georgia, USA, 2013. trained object recognizer, that is just trained on the training images and applied directly to the test images, cannot be expected to have good performance. Such issues of a mismatched train and test sets occur not only in the field of Computer Vision (Duan et al., 2009; Jain & Learned-Miller, 2011; Wang & Wang, 2011), but also in Natural Language Processing (Blitzer et al., 2006; 2007; Glorot et al., 2011), and Automatic Speech Recognition (Leggetter & Woodland, 1995). The problem of differing train and test data distributions is referred to as Domain Adaptation (Daume & Marcu, 2006; Daume, 2007). Two variations of this problem are commonly discussed in the literature. In the first variation, known as Unsupervised Domain Adaptation, no target domain labels are provided during training. One only has access to source domain labels. In the second version of the problem, called Semi-Supervised Domain Adaptation, besides access to source domain labels, we additionally assume access to a few target domain labels during training. Previous approaches to domain adaptation can broadly be classified into a few main groups. One line of research starts out assuming the input representations are fixed (the features given are not learnable) and seeks to address domain shift by modeling the source/target distributional difference via transformations of the given representation. These transformations lead to a different distance metric which can be used in the domain adaptation classification/regression task. This is the approach taken, for instance, in (Saenko et al., 2010) and the recent linear manifold papers of (Gopalan et al., 2011; Gong et al., 2012). 
Another set of approaches in this fixed representation view of the problem treats domain adaptation as a conventional semi-supervised learning problem (Bergamo & Torresani, 2010; Dai et al., 2007; Yang et al., 2007; Duan et al., 2012). (Often, the number of such labelled target samples is not sufficient to train a robust model using target data alone.) These works essentially construct a classifier using the labeled source data, and impose structural constraints on the classifier using unlabeled target data. A second line of research focusses on directly learning the representation of the inputs that is somewhat invariant across domains. Various models have been proposed (Daume, 2007; Daume et al., 2010; Blitzer et al., 2006; 2007; Pan et al., 2009), including deep learning models (Glorot et al., 2011). There are issues with both kinds of the previous proposals. In the fixed representation camp, the type of projection or structural constraint imposed often severely limits the capacity/strength of representations (linear projections, for example, are common). In the representation learning camp, existing deep models do not attempt to explicitly encode the distributional shift between the source and target domains. In this paper we propose a novel deep learning model for the problem of domain adaptation which combines ideas from both of the previous approaches. We call our model DLID: Deep Learning for domain adaptation by Interpolating between Domains. By operating in the deep learning paradigm, we also learn hierarchical non-linear representations of the source and target inputs. However, we explicitly define and use an “interpolating path” between the source and target domains while learning the representation. This interpolating path captures information about structures intermediate to the source and target domains. The resulting representation we obtain is highly rich (containing source to target path information) and allows us to handle the domain adaptation task extremely well. There are multiple benefits to our approach compared to those proposed in the literature. First, we are able to train intricate non-linear representations of the input, while explicitly modeling the transformation between the source and target domains. Second, instead of learning a representation which is independent of the final task, our model can learn representations with information from the final classification/regression task. This is achieved by fine-tuning the pre-trained intermediate feature extractors using feedback from the final task. Finally, our approach can gracefully handle additional training data being made available in the future. We would simply fine-tune our model with the new data, as opposed to having to retrain the entire model again from scratch. We evaluate our model on the domain adaptation problem of object recognition on a standard dataset (Saenko et al., 2010). Empirical results show that our model out-performs the state of the art by a significant margin. In some cases there is an improvement of over 40% from the best previously reported results. An analysis of the learnt representations sheds some light onto the properties that result in such excellent performance (Ben-David et al., 2007). 2. An Overview of DLID At a high level, the DLID model is a deep neural network model designed specifically for the problem of domain adaptation. Deep networks have had tremendous success recently, achieving state-of-the-art performance on a number of machine learning tasks (Bengio, 2009). In large part, their success can be attributed to their ability to learn extremely powerful hierarchical non-linear representations of the inputs. In particular, breakthroughs in unsupervised pre-training (Bengio et al., 2006; Hinton et al., 2006; Hinton & Salakhutdinov, 2006; Ranzato et al., 2006) have been critical in enabling deep networks to be trained robustly. As with other deep neural network models, DLID also learns its representation using unsupervised pre-training. The key difference is that in the DLID model, we explicitly capture information from an “interpolating path” between the source domain and the target domain. As mentioned in the introduction, our interpolating path is motivated by the ideas discussed in Gopalan et al. (2011); Gong et al. (2012). In these works, the original high dimensional features are linearly projected (typically via PCA/PLS) to a lower dimensional space. Because these are linear projections, the source and target lower dimensional subspaces lie on the Grassmann manifold. Geometric properties of the manifold, like shortest paths (geodesics), present an interesting and principled way to transition/interpolate smoothly between the source and target subspaces. It is this path information on the manifold that is used by Gopalan et al. (2011); Gong et al. (2012) to construct more robust and accurate classifiers for the domain adaptation task. In DLID, we define a somewhat different notion of an interpolating path between source and target domains, but appeal to a similar intuition. Figure 1 shows an illustration of our model. Let the set of data samples for the source domain S be denoted by DS, and that of the target domain T be denoted by DT. Starting with all the source data samples DS, we generate intermediate sampled datasets, where for each successive dataset we gradually increase the proportion of samples randomly drawn from DT, and decrease the proportion of samples drawn from DS. In particular, let p ∈ [1, . . . , P] be an index over the P datasets we generate. Then we have Dp = DS for p = 1 and Dp = DT for p = P. For p ∈ [2, . . . , P − 1], datasets Dp and Dp+1 are created in a way so that the proportion of samples from DT in Dp is less than in Dp+1. Each of these data sets can be thought of as a single point on a particular kind of interpolating path between S and T.",
"title": ""
},
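The passage above defines the interpolating path as a sequence of datasets D_1 … D_P in which the share of target samples grows from 0 to 1. The snippet below sketches just that data construction with NumPy (the representation learning and fine-tuning stages are not shown); the array contents are synthetic stand-ins.

```python
import numpy as np

def interpolating_datasets(ds_source, ds_target, P=5, seed=0):
    """Build datasets D_1 .. D_P on the 'interpolating path': D_1 = source
    only, D_P = target only, and in between the share of target samples
    grows with p (a sketch of the DLID data construction)."""
    rng = np.random.default_rng(seed)
    n = len(ds_source)
    path = []
    for p in range(P):
        frac_target = p / (P - 1)                    # 0.0 ... 1.0
        n_t = int(round(frac_target * n))
        src = ds_source[rng.choice(len(ds_source), n - n_t, replace=False)]
        tgt = ds_target[rng.choice(len(ds_target), n_t, replace=False)]
        path.append(np.concatenate([src, tgt]))
    return path

source = np.zeros((100, 8))        # stand-ins for source/target feature sets
target = np.ones((100, 8))
for p, d in enumerate(interpolating_datasets(source, target), start=1):
    print(p, d.mean())             # mean rises from 0.0 to 1.0 along the path
```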
{
"docid": "c36986dd83276fe01e73a4125f99f7c0",
"text": "A new population-based search algorithm called the Bees Algorithm (BA) is presented in this paper. The algorithm mimics the food foraging behavior of swarms of honey bees. This algorithm performs a kind of neighborhood search combined with random search and can be used for both combinatorial optimization and functional optimization and with good numerical optimization results. ABC is a meta-heuristic optimization technique inspired by the intelligent foraging behavior of honeybee swarms. This paper demonstrates the efficiency and robustness of the ABC algorithm to solve MDVRP (Multiple depot vehicle routing problems). KeywordsSwarm intelligence, ant colony optimization, Genetic Algorithm, Particle Swarm optimization, Artificial Bee Colony optimization.",
"title": ""
},
{
"docid": "b9dfc489ff1bf6907929a450ea614d0b",
"text": "Internet of things (IoT) is going to be ubiquitous in the next few years. In the smart city initiative, millions of sensors will be deployed for the implementation of IoT related services. Even in the normal cellular architecture, IoT will be deployed as a value added service for several new applications. Such massive deployment of IoT sensors and devices would certainly cost a large sum of money. In addition to the cost of deployment, the running costs or the operational expenditure of the IoT networks will incur huge power bills and spectrum license charges. As IoT is going to be a pervasive technology, its sustainability and environmental effects too are important. Energy efficiency and overall resource optimization would make it the long term technology of the future. Therefore, green IoT is essential for the operators and the long term sustainability of IoT itself. In this article we consider the green initiatives being worked out for IoT. We also show that narrowband IoT as the greener version right now.",
"title": ""
},
{
"docid": "16816ba52854f6242701b27fdd0263fe",
"text": "The economic viability of colonizing Mars is examined. It is shown, that of all bodies in the solar system other than Earth, Mars is unique in that it has the resources required to support a population of sufficient size to create locally a new branch of human civilization. It is also shown that while Mars may lack any cash material directly exportable to Earth, Mars’ orbital elements and other physical parameters gives a unique positional advantage that will allow it to act as a keystone supporting extractive activities in the asteroid belt and elsewhere in the solar system. The potential of relatively near-term types of interplanetary transportation systems is examined, and it is shown that with very modest advances on a historical scale, systems can be put in place that will allow individuals and families to emigrate to Mars at their own discretion. Their motives for doing so will parallel in many ways the historical motives for Europeans and others to come to America, including higher pay rates in a labor-short economy, escape from tradition and oppression, as well as freedom to exercise their drive to create in an untamed and undefined world. Under conditions of such large scale immigration, sale of real-estate will add a significant source of income to the planet’s economy. Potential increases in real-estate values after terraforming will provide a sufficient financial incentive to do so. In analogy to frontier America, social conditions on Mars will make it a pressure cooker for invention. These inventions, licensed on Earth, will raise both Terrestrial and Martian living standards and contribute large amounts of income to support the development of the colony.",
"title": ""
},
{
"docid": "65dbd6cfc76d7a81eaa8a1dd49a838bb",
"text": "Organizations are attempting to leverage their knowledge resources by employing knowledge management (KM) systems, a key form of which are electronic knowledge repositories (EKRs). A large number of KM initiatives fail due to reluctance of employees to share knowledge through these systems. Motivated by such concerns, this study formulates and tests a theoretical model to explain EKR usage by knowledge contributors. The model employs social exchange theory to identify cost and benefit factors affecting EKR usage, and social capital theory to account for the moderating influence of contextual factors. The model is validated through a large-scale survey of public sector organizations. The results reveal that knowledge self-efficacy and enjoyment in helping others significantly impact EKR usage by knowledge contributors. Contextual factors (generalized trust, pro-sharing norms, and identification) moderate the impact of codification effort, reciprocity, and organizational reward on EKR usage, respectively. It can be seen that extrinsic benefits (reciprocity and organizational reward) impact EKR usage contingent on particular contextual factors whereas the effects of intrinsic benefits (knowledge self-efficacy and enjoyment in helping others) on EKR usage are not moderated by contextual factors. The loss of knowledge power and image do not appear to impact EKR usage by knowledge contributors. Besides contributing to theory building in KM, the results of this study inform KM practice.",
"title": ""
},
{
"docid": "6f3a902ed5871a95f6b5adf197684748",
"text": "BACKGROUND\nThe choice of antimicrobials for initial treatment of peritoneal dialysis (PD)-related peritonitis is crucial for a favorable outcome. There is no consensus about the best therapy; few prospective controlled studies have been published, and the only published systematic reviews did not report superiority of any class of antimicrobials. The objective of this review was to analyze the results of PD peritonitis treatment in adult patients by employing a new methodology, the proportional meta-analysis.\n\n\nMETHODS\nA review of the literature was conducted. There was no language restriction. Studies were obtained from MEDLINE, EMBASE, and LILACS. The inclusion criteria were: (a) case series and RCTs with the number of reported patients in each study greater than five, (b) use of any antibiotic therapy for initial treatment (e.g., cefazolin plus gentamicin or vancomycin plus gentamicin), for Gram-positive (e.g., vancomycin or a first generation cephalosporin), or for Gram-negative rods (e.g., gentamicin, ceftazidime, and fluoroquinolone), (c) patients with PD-related peritonitis, and (d) studies specifying the rates of resolution. A proportional meta-analysis was performed on outcomes using a random-effects model, and the pooled resolution rates were calculated.\n\n\nRESULTS\nA total of 64 studies (32 for initial treatment and negative culture, 28 reporting treatment for Gram-positive rods and 24 reporting treatment for Gram-negative rods) and 21 RCTs met all inclusion criteria (14 for initial treatment and negative culture, 8 reporting treatment for Gram-positive rods and 8 reporting treatment for Gram-negative rods). The pooled resolution rate of ceftazidime plus glycopeptide as initial treatment (pooled proportion = 86% [95% CI 0.82-0.89]) was significantly higher than first generation cephalosporin plus aminoglycosides (pooled proportion = 66% [95% CI 0.57-0.75]) and significantly higher than glycopeptides plus aminoglycosides (pooled proportion = 75% [95% CI 0.69-0.80]. Other comparisons of regimens used for either initial treatment, treatment for Gram-positive rods or Gram-negative rods did not show statistically significant differences.\n\n\nCONCLUSION\nWe showed that the association of a glycopeptide plus ceftazidime is superior to other regimens for initial treatment of PD peritonitis. This result should be carefully analyzed and does not exclude the necessity of monitoring the local microbiologic profile in each dialysis center to choice the initial therapeutic protocol.",
"title": ""
},
{
"docid": "26a6ba8cba43ddfd3cac0c90750bf4ad",
"text": "Mobile applications usually need to be provided for more than one operating system. Developing native apps separately for each platform is a laborious and expensive undertaking. Hence, cross-platform approaches have emerged, most of them based on Web technologies. While these enable developers to use a single code base for all platforms, resulting apps lack a native look & feel. This, however, is often desired by users and businesses. Furthermore, they have a low abstraction level. We propose MD2, an approach for model-driven cross-platform development of apps. With MD2, developers specify an app in a high-level (domain-specific) language designed for describing business apps succinctly. From this model, purely native apps for Android and iOS are automatically generated. MD2 was developed in close cooperation with industry partners and provides means to develop data-driven apps with a native look and feel. Apps can access the device hardware and interact with remote servers.",
"title": ""
},
{
"docid": "b7f15089db3f5d04c1ce1d5f09b0b1f0",
"text": "Despite the flourishing research on the relationships between affect and language, the characteristics of pain-related words, a specific type of negative words, have never been systematically investigated from a psycholinguistic and emotional perspective, despite their psychological relevance. This study offers psycholinguistic, affective, and pain-related norms for words expressing physical and social pain. This may provide a useful tool for the selection of stimulus materials in future studies on negative emotions and/or pain. We explored the relationships between psycholinguistic, affective, and pain-related properties of 512 Italian words (nouns, adjectives, and verbs) conveying physical and social pain by asking 1020 Italian participants to provide ratings of Familiarity, Age of Acquisition, Imageability, Concreteness, Context Availability, Valence, Arousal, Pain-Relatedness, Intensity, and Unpleasantness. We also collected data concerning Length, Written Frequency (Subtlex-IT), N-Size, Orthographic Levenshtein Distance 20, Neighbor Mean Frequency, and Neighbor Maximum Frequency of each word. Interestingly, the words expressing social pain were rated as more negative, arousing, pain-related, and conveying more intense and unpleasant experiences than the words conveying physical pain.",
"title": ""
},
{
"docid": "0f87fefbe2cfc9893b6fc490dd3d40b7",
"text": "With the tremendous amount of textual data available in the Internet, techniques for abstractive text summarization become increasingly appreciated. In this paper, we present work in progress that tackles the problem of multilingual text summarization using semantic representations. Our system is based on abstract linguistic structures obtained from an analysis pipeline of disambiguation, syntactic and semantic parsing tools. The resulting structures are stored in a semantic repository, from which a text planning component produces content plans that go through a multilingual generation pipeline that produces texts in English, Spanish, French, or German. In this paper we focus on the lingusitic components of the summarizer, both analysis and generation.",
"title": ""
},
{
"docid": "031e3f1ae2537b603b4b2119f3dad572",
"text": "Efficient storage and querying of large repositories of RDF content is important due to the widespread growth of Semantic Web and Linked Open Data initiatives. Many novel database systems that store RDF in its native form or within traditional relational storage have demonstrated their ability to scale to large volumes of RDF content. However, it is increasingly becoming obvious that the simple dyadic relationship captured through traditional triples alone is not sufficient for modelling multi-entity relationships, provenance of facts, etc. Such richer models are supported in RDF through two techniques - first, called reification which retains the triple nature of RDF and the second, a non-standard extension called N-Quads. In this paper, we explore the challenges of supporting such richer semantic data by extending the state-of-the-art RDF-3X system. We describe our implementation of RQ-RDF-3X, a reification and quad enhanced RDF-3X, which involved a significant re-engineering ranging from the set of indexes and their compression schemes to the query processing pipeline for queries over reified content. Using large RDF repositories such as YAGO2S and DBpedia, and a set of SPARQL queries that utilize reification model, we demonstrate that RQ-RDF-3X is significantly faster than RDF-3X.",
"title": ""
},
{
"docid": "cea9c1bab28363fc6f225b7843b8df99",
"text": "Published in Agron. J. 104:1336–1347 (2012) Posted online 29 June 2012 doi:10.2134/agronj2012.0065 Copyright © 2012 by the American Society of Agronomy, 5585 Guilford Road, Madison, WI 53711. All rights reserved. No part of this periodical may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. T leaf area index (LAI), the ratio of leaf area to ground area, typically reported as square meters per square meter, is a commonly used biophysical characteristic of vegetation (Watson, 1947). The LAI can be subdivided into photosynthetically active and photosynthetically inactive components. The former, the gLAI, is a metric commonly used in climate (e.g., Buermann et al., 2001), ecological (e.g., Bulcock and Jewitt, 2010), and crop yield (e.g., Fang et al., 2011) models. Because of its wide use and applicability to modeling, there is a need for a nondestructive remote estimation of gLAI across large geographic areas. Various techniques based on remotely sensed data have been utilized for assessing gLAI (see reviews by Pinter et al., 2003; Hatfield et al., 2004, 2008; Doraiswamy et al., 2003; le Maire et al., 2008, and references therein). Vegetation indices, particularly the NDVI (Rouse et al., 1974) and SR (Jordan, 1969), are the most widely used. The NDVI, however, is prone to saturation at moderate to high gLAI values (Kanemasu, 1974; Curran and Steven, 1983; Asrar et al., 1984; Huete et al., 2002; Gitelson, 2004; Wu et al., 2007; González-Sanpedro et al., 2008) and requires reparameterization for different crops and species. The saturation of NDVI has been attributed to insensitivity of reflectance in the red region at moderate to high gLAI values due to the high absorption coefficient of chlorophyll. For gLAI below 3 m2/m2, total absorption by a canopy in the red range reaches 90 to 95%, and further increases in gLAI do not bring additional changes in absorption and reflectance (Hatfield et al., 2008; Gitelson, 2011). Another reason for the decrease in the sensitivity of NDVI to moderate to high gLAI values is the mathematical formulation of that index. At moderate to high gLAI, the NDVI is dominated by nearinfrared (NIR) reflectance. Because scattering by the cellular or leaf structure causes the NIR reflectance to be high and the absorption by chlorophyll causes the red reflectance to be low, NIR reflectance is considerably greater than red reflectance: e.g., for gLAI >3 m2/m2, NIR reflectance is >40% while red reflectance is <5%. Thus, NDVI becomes insensitive to changes in both red and NIR reflectance. Other commonly used VIs include the Enhanced Vegetation Index, EVI (Liu and Huete, 1995; Huete et al., 1997, 2002), its ABStrAct",
"title": ""
},
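For reference, the two indices discussed above have the standard definitions NDVI = (NIR − Red)/(NIR + Red) and SR = NIR/Red. The reflectance values below are invented, but they illustrate the saturation behaviour the abstract describes: once red reflectance bottoms out, NDVI barely moves while SR keeps growing.

```python
def ndvi(nir, red):
    return (nir - red) / (nir + red)

def simple_ratio(nir, red):
    return nir / red

# Illustrative reflectances: once the canopy closes (gLAI > ~3 m2/m2),
# red reflectance stays near 0.05 while NIR keeps rising, so NDVI barely moves.
for nir, red in [(0.30, 0.08), (0.40, 0.05), (0.50, 0.05)]:
    print(round(ndvi(nir, red), 3), round(simple_ratio(nir, red), 1))
# NDVI: 0.579, 0.778, 0.818 -> saturates;  SR: 3.8, 8.0, 10.0 -> keeps increasing
```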
{
"docid": "2805c831c044d0b4da29bf96e9e7d3ad",
"text": "Combined chromatin immunoprecipitation and next-generation sequencing (ChIP-seq) has enabled genome-wide epigenetic profiling of numerous cell lines and tissue types. A major limitation of ChIP-seq, however, is the large number of cells required to generate high-quality data sets, precluding the study of rare cell populations. Here, we present an ultra-low-input micrococcal nuclease-based native ChIP (ULI-NChIP) and sequencing method to generate genome-wide histone mark profiles with high resolution from as few as 10(3) cells. We demonstrate that ULI-NChIP-seq generates high-quality maps of covalent histone marks from 10(3) to 10(6) embryonic stem cells. Subsequently, we show that ULI-NChIP-seq H3K27me3 profiles generated from E13.5 primordial germ cells isolated from single male and female embryos show high similarity to recent data sets generated using 50-180 × more material. Finally, we identify sexually dimorphic H3K27me3 enrichment at specific genic promoters, thereby illustrating the utility of this method for generating high-quality and -complexity libraries from rare cell populations.",
"title": ""
},
{
"docid": "2da9ad29e0b10a8dc8b01a8faf35bb1a",
"text": "Face recognition is challenge task which involves determining the identity of facial images. With availability of a massive amount of labeled facial images gathered from Internet, deep convolution neural networks(DCNNs) have achieved great success in face recognition tasks. Those images are gathered from unconstrain environment, which contain people with different ethnicity, age, gender and so on. However, in the actual application scenario, the target face database may be gathered under different conditions compered with source training dataset, e.g. different ethnicity, different age distribution, disparate shooting environment. These factors increase domain discrepancy between source training database and target application database which makes the learnt model degenerate in target database. Meanwhile, for the target database where labeled data are lacking or unavailable, directly using target data to fine-tune pre-learnt model becomes intractable and impractical. In this paper, we adopt unsupervised transfer learning methods to address this issue. To alleviate the discrepancy between source and target face database and ensure the generalization ability of the model, we constrain the maximum mean discrepancy (MMD) between source database and target database and utilize the massive amount of labeled facial images of source database to training the deep neural network at the same time. We evaluate our method on two face recognition benchmarks and significantly enhance the performance without utilizing the target label.",
"title": ""
}
] |
scidocsrr
|
42030177956935b1186f5c1568db71de
|
A Quick Startup Technique for High- $Q$ Oscillators Using Precisely Timed Energy Injection
|
[
{
"docid": "b20aa52ea2e49624730f6481a99a8af8",
"text": "A 51.3-MHz 18-<inline-formula><tex-math notation=\"LaTeX\">$\\mu\\text{W}$</tex-math></inline-formula> 21.8-ppm/°C relaxation oscillator is presented in 90-nm CMOS. The proposed oscillator employs an integrated error feedback and composite resistors to minimize its sensitivity to temperature variations. For a temperature range from −20 °C to 100 °C, the fabricated circuit demonstrates a frequency variation less than ±0.13%, leading to an average frequency drift of 21.8 ppm/°C. As the supply voltage changes from 0.8 to 1.2 V, the frequency variation is ±0.53%. The measured rms jitter and phase noise at 1-MHz offset are 89.27 ps and −83.29 dBc/Hz, respectively.",
"title": ""
},
{
"docid": "a822bb33898b1fa06d299fbed09647d4",
"text": "The design of a 1.8 GHz 3-stage current-starved ring oscillator with a process- and temperature- compensated current source is presented. Without post-fabrication calibration or off-chip components, the proposed low variation circuit is able to achieve a 65.1% reduction in the normalized standard deviation of its center frequency at room temperature and 85 ppm/ ° C temperature stability with no penalty in the oscillation frequency, the phase noise or the start-up time. Analysis on the impact of transistor scaling indicates that the same circuit topology can be applied to improve variability as feature size scales beyond the current deep submicron technology. Measurements taken on 167 test chips from two different lots fabricated in a standard 90 nm CMOS process show a 3x improvement in frequency variation compared to the baseline case of a conventional current-starved ring oscillator. The power and area for the proposed circuitry is 87 μW and 0.013 mm2 compared to 54 μ W and 0.01 mm 2 in the baseline case.",
"title": ""
}
] |
[
{
"docid": "16a7142a595da55de7df5253177cbcb5",
"text": "The present study represents the first large-scale, prospective comparison to test whether aging out of foster care contributes to homelessness risk in emerging adulthood. A nationally representative sample of adolescents investigated by the child welfare system in 2008 to 2009 from the second cohort of the National Survey of Child and Adolescent Well-being Study (NSCAW II) reported experiences of housing problems at 18- and 36-month follow-ups. Latent class analyses identified subtypes of housing problems, including literal homelessness, housing instability, and stable housing. Regressions predicted subgroup membership based on aging out experiences, receipt of foster care services, and youth and county characteristics. Youth who reunified after out-of-home placement in adolescence exhibited the lowest probability of literal homelessness, while youth who aged out experienced similar rates of literal homelessness as youth investigated by child welfare but never placed out of home. No differences existed between groups on prevalence of unstable housing. Exposure to independent living services and extended foster care did not relate with homelessness prevention. Findings emphasize the developmental importance of families in promoting housing stability in the transition to adulthood, while questioning child welfare current focus on preparing foster youth to live.",
"title": ""
},
{
"docid": "5277cdcfb9352fa0e8cf08ff723d34c6",
"text": "Extractive style query oriented multi document summariza tion generates the summary by extracting a proper set of sentences from multiple documents based on the pre given query. This paper proposes a novel multi document summa rization framework via deep learning model. This uniform framework consists of three parts: concepts extraction, summary generation, and reconstruction validation, which work together to achieve the largest coverage of the docu ments content. A new query oriented extraction technique is proposed to concentrate distributed information to hidden units layer by layer. Then, the whole deep architecture is fi ne tuned by minimizing the information loss of reconstruc tion validation. According to the concentrated information, dynamic programming is used to seek most informative set of sentences as the summary. Experiments on three bench mark datasets demonstrate the effectiveness of the proposed framework and algorithms.",
"title": ""
},
{
"docid": "fdabfd5f242e433bb1497ea913d67cd2",
"text": "OBJECTIVES\nTo investigate the ability of cerebrospinal fluid (CSF) and plasma measures to discriminate early-stage Alzheimer disease (AD) (defined by clinical criteria and presence/absence of brain amyloid) from nondemented aging and to assess whether these biomarkers can predict future dementia in cognitively normal individuals.\n\n\nDESIGN\nEvaluation of CSF beta-amyloid(40) (Abeta(40)), Abeta(42), tau, phosphorylated tau(181), and plasma Abeta(40) and Abeta(42) and longitudinal clinical follow-up (from 1 to 8 years).\n\n\nSETTING\nLongitudinal studies of healthy aging and dementia through an AD research center.\n\n\nPARTICIPANTS\nCommunity-dwelling volunteers (n = 139) aged 60 to 91 years and clinically judged as cognitively normal (Clinical Dementia Rating [CDR], 0) or having very mild (CDR, 0.5) or mild (CDR, 1) AD dementia.\n\n\nRESULTS\nIndividuals with very mild or mild AD have reduced mean levels of CSF Abeta(42) and increased levels of CSF tau and phosphorylated tau(181). Cerebrospinal fluid Abeta(42) level completely corresponds with the presence or absence of brain amyloid (imaged with Pittsburgh Compound B) in demented and nondemented individuals. The CSF tau/Abeta(42) ratio (adjusted hazard ratio, 5.21; 95% confidence interval, 1.58-17.22) and phosphorylated tau(181)/Abeta(42) ratio (adjusted hazard ratio, 4.39; 95% confidence interval, 1.62-11.86) predict conversion from a CDR of 0 to a CDR greater than 0.\n\n\nCONCLUSIONS\nThe very mildest symptomatic stage of AD exhibits the same CSF biomarker phenotype as more advanced AD. In addition, levels of CSF Abeta(42), when combined with amyloid imaging, augment clinical methods for identifying in individuals with brain amyloid deposits whether dementia is present or not. Importantly, CSF tau/Abeta(42) ratios show strong promise as antecedent (preclinical) biomarkers that predict future dementia in cognitively normal older adults.",
"title": ""
},
{
"docid": "88f19225cf9cd323804e8ee551bf875a",
"text": "Traceability—the ability to follow the life of software artifacts—is a topic of great interest to software developers in general, and to requirements engineers and model-driven developers in particular. This article aims to bring those stakeholders together by providing an overview of the current state of traceability research and practice in both areas. As part of an extensive literature survey, we identify commonalities and differences in these areas and uncover several unresolved challenges which affect both domains. A good common foundation for further advances regarding these challenges appears to be a combination of the formal basis and the automated recording opportunities of MDD on the one hand, and the more holistic view of traceability in the requirements engineering domain on the other hand.",
"title": ""
},
{
"docid": "970b65468b6afdf572dd8759cea3f742",
"text": "We propose a framework for ensuring safe behavior of a reinforcement learning agent when the reward function may be difficult to specify. In order to do this, we rely on the existence of demonstrations from expert policies, and we provide a theoretical framework for the agent to optimize in the space of rewards consistent with its existing knowledge. We propose two methods to solve the resulting optimization: an exact ellipsoid-based method and a method in the spirit of the \"follow-the-perturbed-leader\" algorithm. Our experiments demonstrate the behavior of our algorithm in both discrete and continuous problems. The trained agent safely avoids states with potential negative effects while imitating the behavior of the expert in the other states.",
"title": ""
},
{
"docid": "2ab6b91f6e5e01b3bb8c8e5c0fbdcf24",
"text": "Application markets such as Apple’s App Store and Google’s Play Store have played an important role in the popularity of smartphones and mobile devices. However, keeping malware out of application markets is an ongoing challenge. While recent work has developed various techniques to determine what applications do, no work has provided a technical approach to answer, what do users expect? In this paper, we present the first step in addressing this challenge. Specifically, we focus on permissions for a given application and examine whether the application description provides any indication for why the application needs a permission. We present WHYPER, a framework using Natural Language Processing (NLP) techniques to identify sentences that describe the need for a given permission in an application description. WHYPER achieves an average precision of 82.8%, and an average recall of 81.5% for three permissions (address book, calendar, and record audio) that protect frequentlyused security and privacy sensitive resources. These results demonstrate great promise in using NLP techniques to bridge the semantic gap between user expectations and application functionality, further aiding the risk assessment of mobile applications.",
"title": ""
},
{
"docid": "94877adef2f6a0fa0219e0d6494dbbc5",
"text": "A miniaturized quadrature hybrid coupler, a rat-race coupler, and a 4 times 4 Butler matrix based on a newly proposed planar artificial transmission line are presented in this paper for application in ultra-high-frequency (UHF) radio-frequency identification (RFID) systems. This planar artificial transmission line is composed of microstrip quasi-lumped elements and their discontinuities and is capable of synthesizing microstrip lines with various characteristic impedances and electrical lengths. At the center frequency of the UHF RFID system, the occupied sizes of the proposed quadrature hybrid and rat-race couplers are merely 27% and 9% of those of the conventional designs. The miniaturized couplers demonstrate well-behaved wideband responses with no spurious harmonics up to two octaves. The measured results reveal excellent agreement with the simulations. Additionally, a 4 times 4 Butler matrix, which may occupy a large amount of circuit area in conventional designs, has been successfully miniaturized with the help of the proposed artificial transmission line. The circuit size of the Butler matrix is merely 21% of that of a conventional design. The experimental results show that the proposed Butler matrix features good phase control, nearly equal power splitting, and compact size and is therefore applicable to the reader modules in various RFID systems.",
"title": ""
},
{
"docid": "31e6da3635ec5f538f15a7b3e2d95e5b",
"text": "Smart electricity meters are currently deployed in millions of households to collect detailed individual electricity consumption data. Compared with traditional electricity data based on aggregated consumption, smart meter data are much more volatile and less predictable. There is a need within the energy industry for probabilistic forecasts of household electricity consumption to quantify the uncertainty of future electricity demand in order to undertake appropriate planning of generation and distribution. We propose to estimate an additive quantile regression model for a set of quantiles of the future distribution using a boosting procedure. By doing so, we can benefit from flexible and interpretable models, which include an automatic variable selection. We compare our approach with three benchmark methods on both aggregated and disaggregated scales using a smart meter data set collected from 3639 households in Ireland at 30-min intervals over a period of 1.5 years. The empirical results demonstrate that our approach based on quantile regression provides better forecast accuracy for disaggregated demand, while the traditional approach based on a normality assumption (possibly after an appropriate Box-Cox transformation) is a better approximation for aggregated demand. These results are particularly useful since more energy data will become available at the disaggregated level in the future.",
"title": ""
},
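The abstract above estimates a set of quantiles of future household demand with boosted additive quantile regression. As an analogous stand-in (not the authors' implementation), scikit-learn's gradient boosting with the pinball/quantile loss can produce one model per quantile; the half-hourly demand data below are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic half-hourly demand: a time-of-day effect plus skewed noise.
hour = rng.uniform(0, 24, 2000)
demand = 0.5 + 0.3 * np.sin(2 * np.pi * hour / 24) + rng.gamma(2.0, 0.1, 2000)
X = hour.reshape(-1, 1)

# One boosted model per quantile of interest (pinball / quantile loss).
quantiles = [0.1, 0.5, 0.9]
models = {q: GradientBoostingRegressor(loss="quantile", alpha=q,
                                       n_estimators=200, max_depth=3).fit(X, demand)
          for q in quantiles}

x_new = np.array([[18.0]])          # probabilistic forecast for 18:00
print({q: float(m.predict(x_new)[0]) for q, m in models.items()})
```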
{
"docid": "91bdfcad73186a545028d922159f0857",
"text": "Deep generative neural networks have proven effective at both conditional and unconditional modeling of complex data distributions. Conditional generation enables interactive control, but creating new controls often requires expensive retraining. In this paper, we develop a method to condition generation without retraining the model. By post-hoc learning latent constraints, value functions that identify regions in latent space that generate outputs with desired attributes, we can conditionally sample from these regions with gradient-based optimization or amortized actor functions. Combining attribute constraints with a universal “realism” constraint, which enforces similarity to the data distribution, we generate realistic conditional images from an unconditional variational autoencoder. Further, using gradient-based optimization, we demonstrate identity-preserving transformations that make the minimal adjustment in latent space to modify the attributes of an image. Finally, with discrete sequences of musical notes, we demonstrate zero-shot conditional generation, learning latent constraints in the absence of labeled data or a differentiable reward function. Code with dedicated cloud instance has been made publicly available (https://goo.gl/STGMGx).",
"title": ""
},
{
"docid": "368c874a35428310bb0d497045b411f9",
"text": "Triboelectric nanogenerator (TENG) technology has emerged as a new mechanical energy harvesting technology with numerous advantages. This paper analyzes its charging behavior together with a load capacitor. Through numerical and analytical modeling, the charging performance of a TENG with a bridge rectifier under periodic external mechanical motion is completely analogous to that of a dc voltage source in series with an internal resistance. An optimum load capacitance that matches the TENGs impedance is observed for the maximum stored energy. This optimum load capacitance is theoretically detected to be linearly proportional to the charging cycle numbers and the inherent TENG capacitance. Experiments were also performed to further validate our theoretical anticipation and show the potential application of this paper in guiding real experimental designs.",
"title": ""
},
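The charging analogy in the abstract above (a dc source with internal resistance charging a load capacitor, with an optimum load capacitance for maximum stored energy) can be illustrated with a short numeric sweep. All component values and the charging time below are assumptions for the sketch, not values from the paper:

```python
# Numeric sketch of the dc-source-with-internal-resistance analogy.
# V_OC, R_INT and T_CHARGE are assumed illustrative values.
import numpy as np

V_OC = 200.0        # equivalent open-circuit voltage (V), assumed
R_INT = 5e7         # equivalent internal resistance (ohm), assumed
T_CHARGE = 60.0     # charging time for a fixed number of motion cycles (s), assumed

C_load = np.logspace(-9, -5, 400)                          # candidate load capacitances (F)
V_C = V_OC * (1.0 - np.exp(-T_CHARGE / (R_INT * C_load)))  # RC charging law
E_stored = 0.5 * C_load * V_C**2                           # energy stored in the load capacitor

C_opt = C_load[np.argmax(E_stored)]
print(f"optimum load capacitance ~ {C_opt:.2e} F, "
      f"max stored energy ~ {E_stored.max():.2e} J")
```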
{
"docid": "87a256b5e67b97cf4a11b5664a150295",
"text": "This paper presents a method for speech emotion recognition using spectrograms and deep convolutional neural network (CNN). Spectrograms generated from the speech signals are input to the deep CNN. The proposed model consisting of three convolutional layers and three fully connected layers extract discriminative features from spectrogram images and outputs predictions for the seven emotions. In this study, we trained the proposed model on spectrograms obtained from Berlin emotions dataset. Furthermore, we also investigated the effectiveness of transfer learning for emotions recognition using a pre-trained AlexNet model. Preliminary results indicate that the proposed approach based on freshly trained model is better than the fine-tuned model, and is capable of predicting emotions accurately and efficiently.",
"title": ""
},
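As a rough, hedged illustration of the kind of architecture the abstract above describes (three convolutional layers followed by three fully connected layers over single-channel spectrogram images, with seven emotion classes), a PyTorch sketch is given below; the channel counts, pooling, and dropout are assumptions rather than the paper's exact configuration:

```python
# Illustrative PyTorch sketch; layer sizes are assumed, not the paper's exact model.
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, n_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),   # makes the head independent of input size
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, n_classes),       # logits for the seven emotion classes
        )

    def forward(self, x):                   # x: (batch, 1, freq_bins, time_frames)
        return self.classifier(self.features(x))

model = EmotionCNN()
dummy_spectrograms = torch.randn(8, 1, 128, 128)   # batch of single-channel spectrograms
print(model(dummy_spectrograms).shape)             # torch.Size([8, 7])
```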
{
"docid": "828affae8c3052590591c16f02a55d91",
"text": "We present short elementary proofs of the well-known Ruffini-Abel-Galois theorems on unsolvability of algebraic equations in radicals. This proof is obtained from existing expositions by stripping away material not required for the proof (but presumably required elsewhere). In particular, we do not use the terms ‘Galois group’ and even ‘group’. However, our presentation is a good way to learn (or recall) the starting idea of Galois theory: to look at how the symmetry of a polynomial is decreased when a radical is extracted. So the note provides a bridge (by showing that there is no gap) between elementary mathematics and Galois theory. The note is accessible to students familiar with polynomials, complex numbers and permutations; so the note might be interesting easy reading for professional mathematicians.",
"title": ""
},
{
"docid": "3a5ef0db1fbbebd7c466a3b657e5e173",
"text": "Fully homomorphic encryption is faced with two problems now. One is candidate fully homomorphic encryption schemes are few. Another is that the efficiency of fully homomorphic encryption is a big question. In this paper, we propose a fully homomorphic encryption scheme based on LWE, which has better key size. Our main contributions are: (1) According to the binary-LWE recently, we choose secret key from binary set and modify the basic encryption scheme proposed in Linder and Peikert in 2010. We propose a fully homomorphic encryption scheme based on the new basic encryption scheme. We analyze the correctness and give the proof of the security of our scheme. The public key, evaluation keys and tensored ciphertext have better size in our scheme. (2) Estimating parameters for fully homomorphic encryption scheme is an important work. We estimate the concert parameters for our scheme. We compare these parameters between our scheme and Bra12 scheme. Our scheme have public key and private key that smaller by a factor of about logq than in Bra12 scheme. Tensored ciphertext in our scheme is smaller by a factor of about log2q than in Bra12 scheme. Key switching matrix in our scheme is smaller by a factor of about log3q than in Bra12 scheme.",
"title": ""
},
{
"docid": "98bdcca45140bd3ba7b0c19afa06d5a9",
"text": "Skeletal muscle atrophy is a debilitating response to starvation and many systemic diseases including diabetes, cancer, and renal failure. We had proposed that a common set of transcriptional adaptations underlie the loss of muscle mass in these different states. To test this hypothesis, we used cDNA microarrays to compare the changes in content of specific mRNAs in muscles atrophying from different causes. We compared muscles from fasted mice, from rats with cancer cachexia, streptozotocin-induced diabetes mellitus, uremia induced by subtotal nephrectomy, and from pair-fed control rats. Although the content of >90% of mRNAs did not change, including those for the myofibrillar apparatus, we found a common set of genes (termed atrogins) that were induced or suppressed in muscles in these four catabolic states. Among the strongly induced genes were many involved in protein degradation, including polyubiquitins, Ub fusion proteins, the Ub ligases atrogin-1/MAFbx and MuRF-1, multiple but not all subunits of the 20S proteasome and its 19S regulator, and cathepsin L. Many genes required for ATP production and late steps in glycolysis were down-regulated, as were many transcripts for extracellular matrix proteins. Some genes not previously implicated in muscle atrophy were dramatically up-regulated (lipin, metallothionein, AMP deaminase, RNA helicase-related protein, TG interacting factor) and several growth-related mRNAs were down-regulated (P311, JUN, IGF-1-BP5). Thus, different types of muscle atrophy share a common transcriptional program that is activated in many systemic diseases.",
"title": ""
},
{
"docid": "d565220c9e4b9a4b9f8156434b8b4cd3",
"text": "Decision Support Systems (DDS) have developed to exploit Information Technology (IT) to assist decision-makers in a wide variety of fields. The need to use spatial data in many of these diverse fields has led to increasing interest in the development of Spatial Decision Support Systems (SDSS) based around the Geographic Information System (GIS) technology. The paper examines the relationship between SDSS and GIS and suggests that SDSS is poised for further development owing to improvement in technology and the greater availability of spatial data.",
"title": ""
},
{
"docid": "304f4e48ac5d5698f559ae504fc825d9",
"text": "How the circadian clock regulates the timing of sleep is poorly understood. Here, we identify a Drosophila mutant, wide awake (wake), that exhibits a marked delay in sleep onset at dusk. Loss of WAKE in a set of arousal-promoting clock neurons, the large ventrolateral neurons (l-LNvs), impairs sleep onset. WAKE levels cycle, peaking near dusk, and the expression of WAKE in l-LNvs is Clock dependent. Strikingly, Clock and cycle mutants also exhibit a profound delay in sleep onset, which can be rescued by restoring WAKE expression in LNvs. WAKE interacts with the GABAA receptor Resistant to Dieldrin (RDL), upregulating its levels and promoting its localization to the plasma membrane. In wake mutant l-LNvs, GABA sensitivity is decreased and excitability is increased at dusk. We propose that WAKE acts as a clock output molecule specifically for sleep, inhibiting LNvs at dusk to promote the transition from wake to sleep.",
"title": ""
},
{
"docid": "9e91f7e57e074ec49879598c13035d70",
"text": "Wafer Level Package (WLP) technology has seen tremendous advances in recent years and is rapidly being adopted at the 65nm Low-K silicon node. For a true WLP, the package size is same as the die (silicon) size and the package is usually mounted directly on to the Printed Circuit Board (PCB). Board level reliability (BLR) is a bigger challenge on WLPs than the package level due to a larger CTE mismatch and difference in stiffness between silicon and the PCB [1]. The BLR performance of the devices with Low-K dielectric silicon becomes even more challenging due to their fragile nature and lower mechanical strength. A post fab re-distribution layer (RDL) with polymer stack up provides a stress buffer resulting in an improved board level reliability performance. Drop shock (DS) and temperature cycling test (TCT) are the most commonly run tests in the industry to gauge the BLR performance of WLPs. While a superior drop performance is required for devices targeting mobile handset applications, achieving acceptable TCT performance on WLPs can become challenging at times. BLR performance of WLP is sensitive to design features such as die size, die aspect ratio, ball pattern and ball density etc. In this paper, 65nm WLPs with a post fab Cu RDL have been studied for package and board level reliability. Standard JEDEC conditions are applied during the reliability testing. Here, we present a detailed reliability evaluation on multiple WLP sizes and varying ball patterns. Die size ranging from 10 mm2 to 25 mm2 were studied along with variation in design features such as die aspect ratio and the ball density (fully populated and de-populated ball pattern). All test vehicles used the aforementioned 65nm fab node.",
"title": ""
},
{
"docid": "90d1d78d3d624d3cb1ecc07e8acaefd4",
"text": "Wheat straw is an abundant agricultural residue with low commercial value. An attractive alternative is utilization of wheat straw for bioethanol production. However, production costs based on the current technology are still too high, preventing commercialization of the process. In recent years, progress has been made in developing more effective pretreatment and hydrolysis processes leading to higher yield of sugars. The focus of this paper is to review the most recent advances in pretreatment, hydrolysis and fermentation of wheat straw. Based on the type of pretreatment method applied, a sugar yield of 74-99.6% of maximum theoretical was achieved after enzymatic hydrolysis of wheat straw. Various bacteria, yeasts and fungi have been investigated with the ethanol yield ranging from 65% to 99% of theoretical value. So far, the best results with respect to ethanol yield, final ethanol concentration and productivity were obtained with the native non-adapted Saccharomyses cerevisiae. Some recombinant bacteria and yeasts have shown promising results and are being considered for commercial scale-up. Wheat straw biorefinery could be the near-term solution for clean, efficient and economically-feasible production of bioethanol as well as high value-added products.",
"title": ""
},
{
"docid": "14bcbfcb6e7165e67247453944f37ac0",
"text": "This study investigated whether psychologists' confidence in their clinical decisions is really justified. It was hypothesized that as psychologists study information about a case (a) their confidence about the case increases markedly and steadily but (b) the accuracy of their conclusions about the case quickly reaches a ceiling. 32 judges, including 8 clinical psychologists, read background information about a published case, divided into 4 sections. After reading each section of the case, judges answered a set of 25 questions involving personality judgments about the case. Results strongly supported the hypotheses. Accuracy did not increase significantly with increasing information, but confidence increased steadily and significantly. All judges except 2 became overconfident, most of them markedly so. Clearly, increasing feelings of confidence are not a sure sign of increasing predictive accuracy about a case.",
"title": ""
},
{
"docid": "ba0d63c3e6b8807e1a13b36bc30d5d72",
"text": "Weighted median, in the form of either solver or filter, has been employed in a wide range of computer vision solutions for its beneficial properties in sparsity representation. But it is hard to be accelerated due to the spatially varying weight and the median property. We propose a few efficient schemes to reduce computation complexity from O(r2) to O(r) where r is the kernel size. Our contribution is on a new joint-histogram representation, median tracking, and a new data structure that enables fast data access. The effectiveness of these schemes is demonstrated on optical flow estimation, stereo matching, structure-texture separation, image filtering, to name a few. The running time is largely shortened from several minutes to less than 1 second. The source code is provided in the project website.",
"title": ""
}
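To make the weighted-median operation above concrete, the sketch below computes the naive per-pixel weighted median that the abstract's O(r) schemes accelerate. The Gaussian range weights on a guidance patch are an assumed example of a spatially varying weight, not the paper's specific filter:

```python
# Naive O(r^2)-per-pixel weighted median at a single pixel (the baseline
# that the fast joint-histogram schemes accelerate). Weights are assumed.
import numpy as np

def weighted_median(values, weights):
    """Smallest value whose cumulative weight reaches half of the total weight."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

rng = np.random.default_rng(0)
r = 7                                                # window radius
patch = rng.random((2 * r + 1, 2 * r + 1))           # image values inside the window
guide = rng.random((2 * r + 1, 2 * r + 1))           # guidance patch (assumed)
center = guide[r, r]
weights = np.exp(-((guide - center) ** 2) / (2 * 0.1 ** 2))   # Gaussian range weights

print(weighted_median(patch.ravel(), weights.ravel()))
```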
] |
scidocsrr
|
63e603175c9009da16d78693caab1772
|
Spectral and Energy-Efficient Wireless Powered IoT Networks: NOMA or TDMA?
|
[
{
"docid": "1a615a022c441f413fcbdb3dbff9e66d",
"text": "Narrowband Internet of Things (NB-IoT) is a new cellular technology introduced in 3GPP Release 13 for providing wide-area coverage for IoT. This article provides an overview of the air interface of NB-IoT. We describe how NB-IoT addresses key IoT requirements such as deployment flexibility, low device complexity, long battery lifetime, support of massive numbers of devices in a cell, and significant coverage extension beyond existing cellular technologies. We also share the various design rationales during the standardization of NB-IoT in Release 13 and point out several open areas for future evolution of NB-IoT.",
"title": ""
},
{
"docid": "29360e31131f37830e0d6271bab63a6e",
"text": "In this letter, the performance of non-orthogonal multiple access (NOMA) is investigated in a cellular downlink scenario with randomly deployed users. The developed analytical results show that NOMA can achieve superior performance in terms of ergodic sum rates; however, the outage performance of NOMA depends critically on the choices of the users' targeted data rates and allocated power. In particular, a wrong choice of the targeted data rates and allocated power can lead to a situation in which the user's outage probability is always one, i.e. the user's targeted quality of service will never be met.",
"title": ""
}
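The outage behaviour described in the NOMA abstract above can be illustrated with a short Monte Carlo sketch of a two-user downlink with fixed power allocation; all powers, targeted rates, and channel statistics are assumed for illustration only, and the degenerate case the abstract warns about (a targeted rate that can never be met) occurs when (2^R1 - 1) times the strong user's power share exceeds the weak user's share:

```python
# Hedged Monte Carlo sketch of two-user downlink NOMA; parameters are assumed.
import numpy as np

rng = np.random.default_rng(1)
P, sigma2 = 1.0, 0.1                 # total transmit power and noise power (assumed)
a_weak, a_strong = 0.8, 0.2          # power split: more power to the weak (far) user
R1_target, R2_target = 0.5, 1.0      # targeted rates in bits/s/Hz (assumed)

trials = 100_000
h_weak = rng.exponential(0.3, trials)    # |h|^2 of the weak user (assumed statistics)
h_strong = rng.exponential(1.0, trials)  # |h|^2 of the strong user

# Weak user decodes its own signal, treating the strong user's signal as interference.
R1 = np.log2(1 + a_weak * P * h_weak / (a_strong * P * h_weak + sigma2))
# Strong user first decodes (and cancels) the weak user's signal via SIC ...
R12 = np.log2(1 + a_weak * P * h_strong / (a_strong * P * h_strong + sigma2))
# ... and then decodes its own, now interference-free, signal.
R2 = np.log2(1 + a_strong * P * h_strong / sigma2)

# If (2**R1_target - 1) * a_strong >= a_weak, R1 can never reach R1_target and
# the weak user's outage probability is one, as the abstract notes.
outage_weak = np.mean(R1 < R1_target)
outage_strong = np.mean((R12 < R1_target) | (R2 < R2_target))
print(outage_weak, outage_strong, np.mean(R1 + R2))
```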
] |
[
{
"docid": "93b87e8dde0de0c1b198f6a073858d80",
"text": "The current project is an initial attempt at validating the Virtual Reality Cognitive Performance Assessment Test (VRCPAT), a virtual environment-based measure of learning and memory. To examine convergent and discriminant validity, a multitrait-multimethod matrix was used in which we hypothesized that the VRCPAT's total learning and memory scores would correlate with other neuropsychological measures involving learning and memory but not with measures involving potential confounds (i.e., executive functions; attention; processing speed; and verbal fluency). Using a sequential hierarchical strategy, each stage of test development did not proceed until specified criteria were met. The 15-minute VRCPAT battery and a 1.5-hour in-person neuropsychological assessment were conducted with a sample of 30 healthy adults, between the ages of 21 and 36, that included equivalent distributions of men and women from ethnically diverse populations. Results supported both convergent and discriminant validity. That is, findings suggest that the VRCPAT measures a capacity that is (a) consistent with that assessed by traditional paper-and-pencil measures involving learning and memory and (b) inconsistent with that assessed by traditional paper-and-pencil measures assessing neurocognitive domains traditionally assumed to be other than learning and memory. We conclude that the VRCPAT is a valid test that provides a unique opportunity to reliably and efficiently study memory function within an ecologically valid environment.",
"title": ""
},
{
"docid": "45494f14c2d9f284dd3ad3a5be49ca78",
"text": "Developing segmentation techniques for overlapping cells has become a major hurdle for automated analysis of cervical cells. In this paper, an automated three-stage segmentation approach to segment the nucleus and cytoplasm of each overlapping cell is described. First, superpixel clustering is conducted to segment the image into small coherent clusters that are used to generate a refined superpixel map. The refined superpixel map is passed to an adaptive thresholding step to initially segment the image into cellular clumps and background. Second, a linear classifier with superpixel-based features is designed to finalize the separation between nuclei and cytoplasm. Finally, edge and region based cell segmentation are performed based on edge enhancement process, gradient thresholding, morphological operations, and region properties evaluation on all detected nuclei and cytoplasm pairs. The proposed framework has been evaluated using the ISBI 2014 challenge dataset. The dataset consists of 45 synthetic cell images, yielding 270 cells in total. Compared with the state-of-the-art approaches, our approach provides more accurate nuclei boundaries, as well as successfully segments most of overlapping cells.",
"title": ""
},
{
"docid": "fd9461aeac51be30c9d0fbbba298a79b",
"text": "Disaster management is a crucial and urgent research issue. Emergency communication networks (ECNs) provide fundamental functions for disaster management, because communication service is generally unavailable due to large-scale damage and restrictions in communication services. Considering the features of a disaster (e.g., limited resources and dynamic changing of environment), it is always a key problem to use limited resources effectively to provide the best communication services. Big data analytics in the disaster area provides possible solutions to understand the situations happening in disaster areas, so that limited resources can be optimally deployed based on the analysis results. In this paper, we survey existing ECNs and big data analytics from both the content and the spatial points of view. From the content point of view, we survey existing data mining and analysis techniques, and further survey and analyze applications and the possibilities to enhance ECNs. From the spatial point of view, we survey and discuss the most popular methods and further discuss the possibility to enhance ECNs. Finally, we highlight the remaining challenging problems after a systematic survey and studies of the possibilities.",
"title": ""
},
{
"docid": "cb266f07461a58493d35f75949c4605e",
"text": "Zero shot learning in Image Classification refers to the setting where images from some novel classes are absent in the training data but other information such as natural language descriptions or attribute vectors of the classes are available. This setting is important in the real world since one may not be able to obtain images of all the possible classes at training. While previous approaches have tried to model the relationship between the class attribute space and the image space via some kind of a transfer function in order to model the image space correspondingly to an unseen class, we take a different approach and try to generate the samples from the given attributes, using a conditional variational autoencoder, and use the generated samples for classification of the unseen classes. By extensive testing on four benchmark datasets, we show that our model outperforms the state of the art, particularly in the more realistic generalized setting, where the training classes can also appear at the test time along with the novel classes.",
"title": ""
},
{
"docid": "9a7c915803c84bc2270896bd82b4162d",
"text": "In this paper we present a voice command and mouth gesture based robot command interface which is capable of controlling three degrees of freedom. The gesture set was designed in order to avoid head rotation and translation, and thus relying solely in mouth movements. Mouth segmentation is performed by using the normalized a* component, as in [1]. The gesture detection process is carried out by a Gaussian Mixture Model (GMM) based classifier. After that, a state machine stabilizes the system response by restricting the number of possible movements depending on the initial state. Voice commands are modeled using a Hidden Markov Model (HMM) isolated word recognition scheme. The interface was designed taking into account the specific pose restrictions found in the DaVinci Assisted Surgery command console.",
"title": ""
},
{
"docid": "4e6709bf897352c4e8b24a5b77e4e2c5",
"text": "Large-scale classification is an increasingly critical Big Data problem. So far, however, very little has been published on how this is done in practice. In this paper we describe Chimera, our solution to classify tens of millions of products into 5000+ product types at WalmartLabs. We show that at this scale, many conventional assumptions regarding learning and crowdsourcing break down, and that existing solutions cease to work. We describe how Chimera employs a combination of learning, rules (created by in-house analysts), and crowdsourcing to achieve accurate, continuously improving, and cost-effective classification. We discuss a set of lessons learned for other similar Big Data systems. In particular, we argue that at large scales crowdsourcing is critical, but must be used in combination with learning, rules, and in-house analysts. We also argue that using rules (in conjunction with learning) is a must, and that more research attention should be paid to helping analysts create and manage (tens of thousands of) rules more effectively.",
"title": ""
},
{
"docid": "344e5742cc3c1557589cea05b429d743",
"text": "Herein we present a novel big-data framework for healthcare applications. Healthcare data is well suited for bigdata processing and analytics because of the variety, veracity and volume of these types of data. In recent times, many areas within healthcare have been identified that can directly benefit from such treatment. However, setting up these types of architecture is not trivial. We present a novel approach of building a big-data framework that can be adapted to various healthcare applications with relative use, making this a one-stop “Big-Data-Healthcare-in-a-Box”.",
"title": ""
},
{
"docid": "ddc18f2d129d95737b8f0591560d202d",
"text": "A variety of real-life mobile sensing applications are becoming available, especially in the life-logging, fitness tracking and health monitoring domains. These applications use mobile sensors embedded in smart phones to recognize human activities in order to get a better understanding of human behavior. While progress has been made, human activity recognition remains a challenging task. This is partly due to the broad range of human activities as well as the rich variation in how a given activity can be performed. Using features that clearly separate between activities is crucial. In this paper, we propose an approach to automatically extract discriminative features for activity recognition. Specifically, we develop a method based on Convolutional Neural Networks (CNN), which can capture local dependency and scale invariance of a signal as it has been shown in speech recognition and image recognition domains. In addition, a modified weight sharing technique, called partial weight sharing, is proposed and applied to accelerometer signals to get further improvements. The experimental results on three public datasets, Skoda (assembly line activities), Opportunity (activities in kitchen), Actitracker (jogging, walking, etc.), indicate that our novel CNN-based approach is practical and achieves higher accuracy than existing state-of-the-art methods.",
"title": ""
},
{
"docid": "48aa68862748ab502f3942300b4d8e1e",
"text": "While data volumes continue to rise, the capacity of human attention remains limited. As a result, users need analytics engines that can assist in prioritizing attention in this fast data that is too large for manual inspection. We present a set of design principles for the design of fast data analytics engines that leverage the relative scarcity of human attention and overabundance of data: return fewer results, prioritize iterative analysis, and filter fast to compute less. We report on our early experiences employing these principles in the design and deployment of MacroBase, an open source analysis engine for prioritizing attention in fast data. By combining streaming operators for feature transformation, classification, and data summarization, MacroBase provides users with interpretable explanations of key behaviors, acting as a search engine for fast data.",
"title": ""
},
{
"docid": "b4a2c3679fe2490a29617c6a158b9dbc",
"text": "We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.",
"title": ""
},
{
"docid": "77b78ec70f390289424cade3850fc098",
"text": "As the primary barrier between an organism and its environment, epithelial cells are well-positioned to regulate tolerance while preserving immunity against pathogens. Class II major histocompatibility complex molecules (MHC class II) are highly expressed on the surface of epithelial cells (ECs) in both the lung and intestine, although the functional consequences of this expression are not fully understood. Here, we summarize current information regarding the interactions that regulate the expression of EC MHC class II in health and disease. We then evaluate the potential role of EC as non-professional antigen presenting cells. Finally, we explore future areas of study and the potential contribution of epithelial surfaces to gut-lung crosstalk.",
"title": ""
},
{
"docid": "1c915d0ffe515aa2a7c52300d86e90ba",
"text": "This paper presents a tool developed for the purpose of assessing teaching presence in online courses that make use of computer conferencing, and preliminary results from the use of this tool. The method of analysis is based on Garrison, Anderson, and Archer’s [1] model of critical thinking and practical inquiry in a computer conferencing context. The concept of teaching presence is constitutively defined as having three categories – design and organization, facilitating discourse, and direct instruction. Indicators that we search for in the computer conference transcripts identify each category. Pilot testing of the instrument reveals interesting differences in the extent and type of teaching presence found in different graduate level online courses.",
"title": ""
},
{
"docid": "c3e4ef9e9fd5b6301cb0a07ced5c02fc",
"text": "The classification problem of assigning several observations into different disjoint groups plays an important role in business decision making and many other areas. Developing more accurate and widely applicable classification models has significant implications in these areas. It is the reason that despite of the numerous classification models available, the research for improving the effectiveness of these models has never stopped. Combining several models or using hybrid models has become a common practice in order to overcome the deficiencies of single models and can be an effective way of improving upon their predictive performance, especially when the models in combination are quite different. In this paper, a novel hybridization of artificial neural networks (ANNs) is proposed using multiple linear regression models in order to yield more general and more accurate model than traditional artificial neural networks for solving classification problems. Empirical results indicate that the proposed hybrid model exhibits effectively improved classification accuracy in comparison with traditional artificial neural networks and also some other classification models such as linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), K-nearest neighbor (KNN), and support vector machines (SVMs) using benchmark and real-world application data sets. These data sets vary in the number of classes (two versus multiple) and the source of the data (synthetic versus real-world). Therefore, it can be applied as an appropriate alternate approach for solving classification problems, specifically when higher forecasting",
"title": ""
},
{
"docid": "82a0169afe20e2965f7fdd1a8597b7d3",
"text": "Accurate face recognition is critical for many security applications. Current automatic face-recognition systems are defeated by natural changes in lighting and pose, which often affect face images more profoundly than changes in identity. The only system that can reliably cope with such variability is a human observer who is familiar with the faces concerned. We modeled human familiarity by using image averaging to derive stable face representations from naturally varying photographs. This simple procedure increased the accuracy of an industry standard face-recognition algorithm from 54% to 100%, bringing the robust performance of a familiar human to an automated system.",
"title": ""
},
{
"docid": "387634b226820f3aa87fede466acd6c2",
"text": "Objectives To evaluate the ability of a short-form FCE to predict future timely and sustained return-to-work. Methods A prospective cohort study was conducted using data collected during a cluster RCT. Subject performance on the items in the short-form FCE was compared to administrative recovery outcomes from a workers’ compensation database. Outcomes included days to claim closure, days to time loss benefit suspension and future recurrence (defined as re-opening a closed claim, restarting benefits, or filing a new claim for injury to the same body region). Analysis included multivariable Cox and logistic regression using a risk factor modeling strategy. Potential confounders included age, sex, injury duration, and job attachment status, among others. Results The sample included 147 compensation claimants with a variety of musculoskeletal injuries. Subjects who demonstrated job demand levels on all FCE items were more likely to have their claims closed (adjusted Hazard Ratio 5.52 (95% Confidence Interval 3.42–8.89), and benefits suspended (adjusted Hazard Ratio 5.45 (95% Confidence Interval 2.73–10.85) over the follow-up year. The proportion of variance explained by the FCE ranged from 18 to 27%. FCE performance was not significantly associated with future recurrence. Conclusion A short-form FCE appears to provide useful information for predicting time to recovery as measured through administrative outcomes, but not injury recurrence. The short-form FCE may be an efficient option for clinicians using FCE in the management of injured workers.",
"title": ""
},
{
"docid": "9fd247bb0f45d09e11c05fca48372ee8",
"text": "Based on the CSMC 0.6um 40V BCD process and the bandgap principle a reference circuit used in high voltage chip is designed. The simulation results show that a temperature coefficient of 26.5ppm/°C in the range of 3.5∼40V supply, the output voltage is insensitive to the power supply, when the supply voltage rages from 3.5∼40V, the output voltage is equal to 1.2558V to 1.2573V at room temperature. The circuit we designed has high precision and stability, thus it can be used as stability reference voltage in power management IC.",
"title": ""
},
{
"docid": "0d0fae25e045c730b68d63e2df1dfc7f",
"text": "It is very difficult to over-emphasize the benefits of accurate data. Errors in data are generally the most expensive aspect of data entry, costing the users even much more compared to the original data entry. Unfortunately, these costs are intangibles or difficult to measure. If errors are detected at an early stage then it requires little cost to remove the errors. Incorrect and misleading data lead to all sorts of unpleasant and unnecessary expenses. Unluckily, it would be very expensive to correct the errors after the data has been processed, particularly when the processed data has been converted into the knowledge for decision making. No doubt a stitch in time saves nine i.e. a timely effort will prevent more work at later stage. Moreover, time spent in processing errors can also have a significant cost. One of the major problems with automated data entry systems are errors. In this paper we discuss many well known techniques to minimize errors, different cleansing approaches and, suggest how we can improve accuracy rate. Framework available for data cleansing offer the fundamental services such as attribute selection, formation of tokens, selection of clustering algorithms, selection of eliminator functions etc.",
"title": ""
},
{
"docid": "04373474e0d9902fdee169492ece6dd0",
"text": "The development of ivermectin as a complementary vector control tool will require good quality evidence. This paper reviews the different eco-epidemiological contexts in which mass drug administration with ivermectin could be useful. Potential scenarios and pharmacological strategies are compared in order to help guide trial design. The rationale for a particular timing of an ivermectin-based tool and some potentially useful outcome measures are suggested.",
"title": ""
},
{
"docid": "5c1d6a2616a54cd8d8316b8d37f0147d",
"text": "Cadmium (Cd) is a toxic, nonessential transition metal and contributes a health risk to humans, including various cancers and cardiovascular diseases; however, underlying molecular mechanisms remain largely unknown. Cells transmit information to the next generation via two distinct ways: genetic and epigenetic. Chemical modifications to DNA or histone that alters the structure of chromatin without change of DNA nucleotide sequence are known as epigenetics. These heritable epigenetic changes include DNA methylation, post-translational modifications of histone tails (acetylation, methylation, phosphorylation, etc), and higher order packaging of DNA around nucleosomes. Apart from DNA methyltransferases, histone modification enzymes such as histone acetyltransferase, histone deacetylase, and methyltransferase, and microRNAs (miRNAs) all involve in these epigenetic changes. Recent studies indicate that Cd is able to induce various epigenetic changes in plant and mammalian cells in vitro and in vivo. Since aberrant epigenetics plays a critical role in the development of various cancers and chronic diseases, Cd may cause the above-mentioned pathogenic risks via epigenetic mechanisms. Here we review the in vitro and in vivo evidence of epigenetic effects of Cd. The available findings indicate that epigenetics occurred in association with Cd induction of malignant transformation of cells and pathological proliferation of tissues, suggesting that epigenetic effects may play a role in Cd toxic, particularly carcinogenic effects. The future of environmental epigenomic research on Cd should include the role of epigenetics in determining long-term and late-onset health effects following Cd exposure.",
"title": ""
}
] |
scidocsrr
|
84e93effd1fc051cbfd1fdda0017d7f0
|
A review on deep learning for recommender systems: challenges and remedies
|
[
{
"docid": "659b3c56790b92b5b02dcdbab76bef0c",
"text": "Recommender systems based on deep learning technology pay huge attention recently. In this paper, we propose a collaborative filtering based recommendation algorithm that utilizes the difference of similarities among users derived from different layers in stacked denoising autoencoders. Since different layers in a stacked autoencoder represent the relationships among items with rating at different levels of abstraction, we can expect to make recommendations more novel, various and serendipitous, compared with a normal collaborative filtering using single similarity. The results of experiments using MovieLens dataset show that the proposed recommendation algorithm can improve the diversity of recommendation lists without great loss of accuracy.",
"title": ""
},
{
"docid": "89fd46da8542a8ed285afb0cde9cc236",
"text": "Collaborative Filtering with Implicit Feedbacks (e.g., browsing or clicking records), named as CF-IF, is demonstrated to be an effective way in recommender systems. Existing works of CF-IF can be mainly classified into two categories, i.e., point-wise regression based and pairwise ranking based, where the latter one relaxes assumption and usually obtains better performance in empirical studies. In real applications, implicit feedback is often very sparse, causing CF-IF based methods to degrade significantly in recommendation performance. In this case, side information (e.g., item content) is usually introduced and utilized to address the data sparsity problem. Nevertheless, the latent feature representation learned from side information by topic model may not be very effective when the data is too sparse. To address this problem, we propose collaborative deep ranking (CDR), a hybrid pair-wise approach with implicit feedback, which leverages deep feature representation of item content into Bayesian framework of pair-wise ranking model in this paper. The experimental analysis on a real-world dataset shows CDR outperforms three state-of-art methods in terms of recall metric under different sparsity level.",
"title": ""
}
] |
[
{
"docid": "e2666b0eed30a4eed2ad0cde07324d73",
"text": "It is logical that the requirement for antioxidant nutrients depends on a person's exposure to endogenous and exogenous reactive oxygen species. Since cigarette smoking results in an increased cumulative exposure to reactive oxygen species from both sources, it would seem cigarette smokers would have an increased requirement for antioxidant nutrients. Logic dictates that a diet high in antioxidant-rich foods such as fruits, vegetables, and spices would be both protective and a prudent preventive strategy for smokers. This review examines available evidence of fruit and vegetable intake, and supplementation of antioxidant compounds by smokers in an attempt to make more appropriate nutritional recommendations to this population.",
"title": ""
},
{
"docid": "83ad15e2ffeebb21705b617646dc4ed7",
"text": "As Twitter becomes a more common means for officials to communicate with their constituents, it becomes more important that we understand how officials use these communication tools. Using data from 380 members of Congress' Twitter activity during the winter of 2012, we find that officials frequently use Twitter to advertise their political positions and to provide information but rarely to request political action from their constituents or to recognize the good work of others. We highlight a number of differences in communication frequency between men and women, Senators and Representatives, Republicans and Democrats. We provide groundwork for future research examining the behavior of public officials online and testing the predictive power of officials' social media behavior.",
"title": ""
},
{
"docid": "28ff3b1e9f29d7ae4b57f6565330cde5",
"text": "To identify the effects of core stabilization exercise on the Cobb angle and lumbar muscle strength of adolescent patients with idiopathic scoliosis. Subjects in the present study consisted of primary school students who were confirmed to have scoliosis on radiologic examination performed during their visit to the National Fitness Center in Seoul, Korea. Depending on whether they participated in a 12-week core stabilization exercise program, subjects were divided into the exercise (n=14, age 12.71±0.72 years) or control (n=15, age 12.80±0.86 years) group. The exercise group participated in three sessions of core stabilization exercise per week for 12 weeks. The Cobb angle, flexibility, and lumbar muscle strength tests were performed before and after core stabilization exercise. Repeated-measure two-way analysis of variance was performed to compare the treatment effects between the exercise and control groups. There was no significant difference in thoracic Cobb angle between the groups. The exercise group had a significant decrease in the lumbar Cobb angle after exercise compared to before exercise (P<0.001). The exercise group also had a significant increase in lumbar flexor and extensor muscles strength after exercise compared to before exercise (P<0.01 and P<0.001, respectively). Core stabilization exercise can be an effective therapeutic exercise to decrease the Cobb angle and improve lumbar muscle strength in adolescents with idiopathic scoliosis.",
"title": ""
},
{
"docid": "ce5efa83002cee32a5ef8b8b73b81a60",
"text": "Registering a 3D facial model to a 2D image under occlusion is difficult. First, not all of the detected facial landmarks are accurate under occlusions. Second, the number of reliable landmarks may not be enough to constrain the problem. We propose a method to synthesize additional points (Sensible Points) to create pose hypotheses. The visual clues extracted from the fiducial points, non-fiducial points, and facial contour are jointly employed to verify the hypotheses. We define a reward function to measure whether the projected dense 3D model is well-aligned with the confidence maps generated by two fully convolutional networks, and use the function to train recurrent policy networks to move the Sensible Points. The same reward function is employed in testing to select the best hypothesis from a candidate pool of hypotheses. Experimentation demonstrates that the proposed approach is very promising in solving the facial model registration problem under occlusion.",
"title": ""
},
{
"docid": "255a155986548bb873ee0bc88a00222b",
"text": "Patterns of neural activity are systematically elicited as the brain experiences categorical stimuli and a major challenge is to understand what these patterns represent. Two influential approaches, hitherto treated as separate analyses, have targeted this problem by using model-representations of stimuli to interpret the corresponding neural activity patterns. Stimulus-model-based-encoding synthesizes neural activity patterns by first training weights to map between stimulus-model features and voxels. This allows novel model-stimuli to be mapped into voxel space, and hence the strength of the model to be assessed by comparing predicted against observed neural activity. Representational Similarity Analysis (RSA) assesses models by testing how well the grand structure of pattern-similarities measured between all pairs of model-stimuli aligns with the same structure computed from neural activity patterns. RSA does not require model fitting, but also does not allow synthesis of neural activity patterns, thereby limiting its applicability. We introduce a new approach, representational similarity-encoding, that builds on the strengths of RSA and robustly enables stimulus-model-based neural encoding without model fitting. The approach therefore sidesteps problems associated with overfitting that notoriously confront any approach requiring parameter estimation (and is consequently low cost computationally), and importantly enables encoding analyses to be incorporated within the wider Representational Similarity Analysis framework. We illustrate this new approach by using it to synthesize and decode fMRI patterns representing the meanings of words, and discuss its potential biological relevance to encoding in semantic memory. Our new similarity-based encoding approach unites the two previously disparate methods of encoding models and RSA, capturing the strengths of both, and enabling similarity-based synthesis of predicted fMRI patterns.",
"title": ""
},
{
"docid": "91bbea10b8df8a708b65947c8a8832dc",
"text": "Event sequence, asynchronously generated with random timestamp, is ubiquitous among applications. The precise and arbitrary timestamp can carry important clues about the underlying dynamics, and has lent the event data fundamentally different from the time-series whereby series is indexed with fixed and equal time interval. One expressive mathematical tool for modeling event is point process. The intensity functions of many point processes involve two components: the background and the effect by the history. Due to its inherent spontaneousness, the background can be treated as a time series while the other need to handle the history events. In this paper, we model the background by a Recurrent Neural Network (RNN) with its units aligned with time series indexes while the history effect is modeled by another RNN whose units are aligned with asynchronous events to capture the long-range dynamics. The whole model with event type and timestamp prediction output layers can be trained end-to-end. Our approach takes an RNN perspective to point process, and models its background and history effect. For utility, our method allows a black-box treatment for modeling the intensity which is often a pre-defined parametric form in point processes. Meanwhile end-to-end training opens the venue for reusing existing rich techniques in deep network for point process modeling. We apply our model to the predictive maintenance problem using a log dataset by more than 1000 ATMs from a global bank headquartered in North America.",
"title": ""
},
{
"docid": "9083d1159628f0b9a363aca5dea47591",
"text": "Cocitation and co-word methods have long been used to detect and track emerging topics in scientific literature, but both have weaknesses. Recently, while many researchers have adopted generative probabilistic models for topic detection and tracking, few have compared generative probabilistic models with traditional cocitation and co-word methods in terms of their overall performance. In this article, we compare the performance of hierarchical Dirichlet process (HDP), a promising generative probabilistic model, with that of the 2 traditional topic detecting and tracking methods— cocitation analysis and co-word analysis. We visualize and explore the relationships between topics identified by the 3 methods in hierarchical edge bundling graphs and time flow graphs. Our result shows that HDP is more sensitive and reliable than the other 2 methods in both detecting and tracking emerging topics. Furthermore, we demonstrate the important topics and topic evolution trends in the literature of terrorism research with the HDP method.",
"title": ""
},
{
"docid": "4c82ba56d6532ddc57c2a2978de7fe5a",
"text": "This paper presents a Model Reference Adaptive System (MRAS) based speed sensorless estimation of vector controlled Induction Motor Drive. MRAS based techniques are one of the best methods to estimate the rotor speed due to its performance and straightforward stability approach. Depending on the type of tuning signal driving the adaptation mechanism, MRAS estimators are classified into rotor flux based MRAS, back e.m.f based MRAS, reactive power based MRAS and artificial neural network based MRAS. In this paper, the performance of the rotor flux based MRAS for estimating the rotor speed was studied. Overview on the IM mathematical model is briefly summarized to establish a physical basis for the sensorless scheme used. Further, the theoretical basis of indirect field oriented vector control is explained in detail and it is implemented in MATLAB/SIMULINK.",
"title": ""
},
{
"docid": "179298e4aa5fbd8a9ea08bde263ceaf5",
"text": "Healthcare applications are considered as promising fields for wireless sensor networks, where patients can be monitored using wireless medical sensor networks (WMSNs). Current WMSN healthcare research trends focus on patient reliable communication, patient mobility, and energy-efficient routing, as a few examples. However, deploying new technologies in healthcare applications without considering security makes patient privacy vulnerable. Moreover, the physiological data of an individual are highly sensitive. Therefore, security is a paramount requirement of healthcare applications, especially in the case of patient privacy, if the patient has an embarrassing disease. This paper discusses the security and privacy issues in healthcare application using WMSNs. We highlight some popular healthcare projects using wireless medical sensor networks, and discuss their security. Our aim is to instigate discussion on these critical issues since the success of healthcare application depends directly on patient security and privacy, for ethic as well as legal reasons. In addition, we discuss the issues with existing security mechanisms, and sketch out the important security requirements for such applications. In addition, the paper reviews existing schemes that have been recently proposed to provide security solutions in wireless healthcare scenarios. Finally, the paper ends up with a summary of open security research issues that need to be explored for future healthcare applications using WMSNs.",
"title": ""
},
{
"docid": "3e98e933aff32193fe4925f39fd04198",
"text": "Estimating surface normals is an important task in computer vision, e.g. in surface reconstruction, registration and object detection. In stereo vision, the error of depth reconstruction increases quadratically with distance. This makes estimation of surface normals an especially demanding task. In this paper, we analyze how error propagates from noisy disparity data to the orientation of the estimated surface normal. Firstly, we derive a transformation for normals between disparity space and world coordinates. Afterwards, the propagation of disparity noise is analyzed by means of a Monte Carlo method. Normal reconstruction at a pixel position requires to consider a certain neighborhood of the pixel. The extent of this neighborhood affects the reconstruction error. Our method allows to determine the optimal neighborhood size required to achieve a pre specified deviation of the angular reconstruction error, defined by a confidence interval. We show that the reconstruction error only depends on the distance of the surface point to the camera, the pixel distance to the principal point in the image plane and the angle at which the viewing ray intersects the surface.",
"title": ""
},
{
"docid": "7db9cf29dd676fa3df5a2e0e95842b6e",
"text": "We present a novel approach to still image denoising based on e ective filtering in 3D transform domain by combining sliding-window transform processing with block-matching. We process blocks within the image in a sliding manner and utilize the block-matching concept by searching for blocks which are similar to the currently processed one. The matched blocks are stacked together to form a 3D array and due to the similarity between them, the data in the array exhibit high level of correlation. We exploit this correlation by applying a 3D decorrelating unitary transform and e ectively attenuate the noise by shrinkage of the transform coe cients. The subsequent inverse 3D transform yields estimates of all matched blocks. After repeating this procedure for all image blocks in sliding manner, the final estimate is computed as weighed average of all overlapping blockestimates. A fast and e cient algorithm implementing the proposed approach is developed. The experimental results show that the proposed method delivers state-of-art denoising performance, both in terms of objective criteria and visual quality.",
"title": ""
},
{
"docid": "6b064b9f4c90a60fab788f9d5aee8b58",
"text": "Extracorporeal photopheresis (ECP) is a technique that was developed > 20 years ago to treat erythrodermic cutaneous T-cell lymphoma (CTCL). The technique involves removal of peripheral blood, separation of the buffy coat, and photoactivation with a photosensitizer and ultraviolet A irradiation before re-infusion of cells. More than 1000 patients with CTCL have been treated with ECP, with response rates of 31-100%. ECP has been used in a number of other conditions, most widely in the treatment of chronic graft-versus-host disease (cGvHD) with response rates of 29-100%. ECP has also been used in several other autoimmune diseases including acute GVHD, solid organ transplant rejection and Crohn's disease, with some success. ECP is a relatively safe procedure, and side-effects are typically mild and transient. Severe reactions including vasovagal syncope or infections are uncommon. This is very valuable in conditions for which alternative treatments are highly toxic. The mechanism of action of ECP remains elusive. ECP produces a number of immunological changes and in some patients produces immune homeostasis with resultant clinical improvement. ECP is available in seven centres in the UK. Experts from all these centres formed an Expert Photopheresis Group and published the UK consensus statement for ECP in 2008. All centres consider patients with erythrodermic CTCL and steroid-refractory cGvHD for treatment. The National Institute for Health and Clinical Excellence endorsed the use of ECP for CTCL and suggested a need for expansion while recommending its use in specialist centres. ECP is safe, effective, and improves quality of life in erythrodermic CTCL and cGvHD, and should be more widely available for these patients.",
"title": ""
},
{
"docid": "598dd39ec35921242b94f17e24b30389",
"text": "In this paper, we present a study on the characterization and the classification of textures. This study is performed using a set of values obtained by the computation of indexes. To obtain these indexes, we extract a set of data with two techniques: the computation of matrices which are statistical representations of the texture and the computation of \"measures\". These matrices and measures are subsequently used as parameters of a function bringing real or discrete values which give information about texture features. A model of texture characterization is built based on this numerical information, for example to classify textures. An application is proposed to classify cells nuclei in order to diagnose patients affected by the Progeria disease.",
"title": ""
},
{
"docid": "5bde29ce109714f623ae9d69184a8708",
"text": "Adaptive beamforming methods are known to degrade if some of underlying assumptions on the environment, sources, or sensor array become violated. In particular, if the desired signal is present in training snapshots, the adaptive array performance may be quite sensitive even to slight mismatches between the presumed and actual signal steering vectors (spatial signatures). Such mismatches can occur as a result of environmental nonstationarities, look direction errors, imperfect array calibration, distorted antenna shape, as well as distortions caused by medium inhomogeneities, near–far mismatch, source spreading, and local scattering. The similar type of performance degradation can occur when the signal steering vector is known exactly but the training sample size is small. In this paper, we develop a new approach to robust adaptive beamforming in the presence of an arbitrary unknown signal steering vector mismatch. Our approach is based on the optimization of worst-case performance. It turns out that the natural formulation of this adaptive beamforming problem involves minimization of a quadratic function subject to infinitely many nonconvex quadratic constraints. We show that this (originally intractable) problem can be reformulated in a convex form as the so-called second-order cone (SOC) program and solved efficiently (in polynomial time) using the well-established interior point method. It is also shown that the proposed technique can be interpreted in terms of diagonal loading where the optimal value of the diagonal loading factor is computed based on the known level of uncertainty of the signal steering vector. Computer simulations with several frequently encountered types of signal steering vector mismatches show better performance of our robust beamformer as compared with existing adaptive beamforming algorithms.",
"title": ""
},
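Because the abstract above notes that the worst-case-optimal robust beamformer can be interpreted as diagonal loading with an optimally chosen loading factor, a minimal numpy sketch of a diagonally loaded MVDR beamformer is given below. The array geometry, scenario, and fixed loading level are assumptions; the paper's uncertainty-driven choice of the loading factor (via the SOC program) is not reproduced here:

```python
# Minimal sketch: sample-covariance MVDR beamformer with fixed diagonal loading.
# Scenario parameters are assumed for illustration.
import numpy as np

def steering_vector(n_sensors, theta_deg, d=0.5):
    """Uniform linear array response; element spacing d in wavelengths."""
    k = np.arange(n_sensors)
    return np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

rng = np.random.default_rng(0)
M, snapshots = 10, 200
a_sig = steering_vector(M, 5.0)      # presumed (possibly mismatched) signal steering vector
a_int = steering_vector(M, 40.0)     # interferer direction (assumed)

s = rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)
i = 3.0 * (rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots))
noise = 0.5 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
X = np.outer(a_sig, s) + np.outer(a_int, i) + noise   # signal present in the training data

R_hat = X @ X.conj().T / snapshots     # sample covariance matrix
gamma = 10.0                           # fixed diagonal loading factor (assumed)
R_dl = R_hat + gamma * np.eye(M)

w = np.linalg.solve(R_dl, a_sig)
w = w / (a_sig.conj() @ w)             # MVDR normalization: unit gain toward a_sig
print(abs(w.conj() @ a_int))           # suppressed response toward the interferer
```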
{
"docid": "e8a0e37fcb90f43785da710792b02c3c",
"text": "As the use of Twitter has become more commonplace throughout many nations, its role in political discussion has also increased. This has been evident in contexts ranging from general political discussion through local, state, and national elections (such as in the 2010 Australian elections) to protests and other activist mobilisation (for example in the current uprisings in Tunisia, Egypt, and Yemen, as well as in the controversy around Wikileaks). Research into the use of Twitter in such political contexts has also developed rapidly, aided by substantial advancements in quantitative and qualitative methodologies for capturing, processing, analysing, and visualising Twitter updates by large groups of users. Recent work has especially highlighted the role of the Twitter hashtag – a short keyword, prefixed with the hash symbol ‘#’ – as a means of coordinating a distributed discussion between more or less large groups of users, who do not need to be connected through existing ‘follower’ networks. Twitter hashtags – such as ‘#ausvotes’ for the 2010 Australian elections, ‘#londonriots’ for the coordination of information and political debates around the recent unrest in London, or ‘#wikileaks’ for the controversy around Wikileaks thus aid the formation of ad hoc publics around specific themes and topics. They emerge from within the Twitter community – sometimes as a result of pre-planning or quickly reached consensus, sometimes through protracted debate about what the appropriate hashtag for an event or topic should be (which may also lead to the formation of competing publics using different hashtags). Drawing on innovative methodologies for the study of Twitter content, this paper examines the use of hashtags in political debate in the context of a number of major case studies.",
"title": ""
},
{
"docid": "f117503bf48ea9ddf575dedf196d3bcd",
"text": "In recent years, prison officials have increasingly turned to solitary confinement as a way to manage difficult or dangerous prisoners. Many of the prisoners subjected to isolation, which can extend for years, have serious mental illness, and the conditions of solitary confinement can exacerbate their symptoms or provoke recurrence. Prison rules for isolated prisoners, however, greatly restrict the nature and quantity of mental health services that they can receive. In this article, we describe the use of isolation (called segregation by prison officials) to confine prisoners with serious mental illness, the psychological consequences of such confinement, and the response of U.S. courts and human rights experts. We then address the challenges and human rights responsibilities of physicians confronting this prison practice. We conclude by urging professional organizations to adopt formal positions against the prolonged isolation of prisoners with serious mental illness.",
"title": ""
},
{
"docid": "1dd4a95adcd4f9e7518518148c3605ac",
"text": "Kernel modules are an integral part of most operating systems (OS) as they provide flexible ways of adding new functionalities (such as file system or hardware support) to the kernel without the need to recompile or reload the entire kernel. Aside from providing an interface between the user and the hardware, these modules maintain system security and reliability. Malicious kernel level exploits (e.g. code injections) provide a gateway to a system's privileged level where the attacker has access to an entire system. Such attacks may be detected by performing code integrity checks. Several commodity operating systems (such as Linux variants and MS Windows) maintain signatures of different pieces of kernel code in a database for code integrity checking purposes. However, it quickly becomes cumbersome and time consuming to maintain a database of legitimate dynamic changes in the code, such as regular module updates. In this paper we present Mod Checker, which checks in-memory kernel modules' code integrity in real time without maintaining a database of hashes. Our solution applies to virtual environments that have multiple virtual machines (VMs) running the same version of the operating system, an environment commonly found in large cloud servers. Mod Checker compares kernel module among a pool of VMs within a cloud. We thoroughly evaluate the effectiveness and runtime performance of Mod Checker and conclude that Mod Checker is able to detect any change in a kernel module's headers and executable content with minimal or no impact on the guest operating systems' performance.",
"title": ""
},
{
"docid": "e7d8f97d7d76ae089842e602b91df21c",
"text": "In this paper, we propose a novel text representation paradigm and a set of follow-up text representation models based on cognitive psychology theories. The intuition of our study is that the knowledge implied in a large collection of documents may improve the understanding of single documents. Based on cognitive psychology theories, we propose a general text enrichment framework, study the key factors to enable activation of implicit information, and develop new text representation methods to enrich text with the implicit information. Our study aims to mimic some aspects of human cognitive procedure in which given stimulant words serve to activate understanding implicit concepts. By incorporating human cognition into text representation, the proposed models advance existing studies by mining implicit information from given text and coordinating with most existing text representation approaches at the same time, which essentially bridges the gap between explicit and implicit information. Experiments on multiple tasks show that the implicit information activated by our proposed models matches human intuition and significantly improves the performance of the text mining tasks as well.",
"title": ""
},
{
"docid": "2a98cfbd1036ef0cdccbc491b33a9af4",
"text": "Optimal facility layout is of the factors that can affect the efficiency of any organization and annually millions of dollars costs are created for it or profits are saved. Studies done in the field of facility layout can be classified into two general categories: Facility layout issues in static position and facility layout issues in dynamic position. Because that facilities dynamic layout problem is realistic this paper investigates it and tries to consider all necessary aspects of this issue to become it more practical. In this regard, this research has developed three objectives model and tries to simultaneously minimize total operating costs and also production time. Since the calculation of production time using analytical relations is impossible, this research using simulation and regression analysis of a statistical correlation tries to measure production time. So the developed model is a combination of analytical and statistical relationships. The proposed model is of NP-HARD issues, so that even finding an optimal solution for its small scale is very difficult and time consuming. Multi-objective meta-heuristic NSGAII and NRGA algorithms are used to solve the problem. Since the outputs of meta-heuristic algorithms are highly dependent on the algorithms input parameters, Taguchi experimental design method is also used to set parameters. Alsoin order to assess the efficiency of provided procedures, the proposed method has been analyzed on generated pilot issues with various aspects. The results of the comparing algorithms on several criteria, consistently show the superiority of the NSGA-II than NRGA in problem solving.",
"title": ""
}
] |
scidocsrr
|
dba48ea89c1a44ac1955dfd6e31a9f91
|
Large Scale Log Analytics through ELK
|
[
{
"docid": "5a63b6385068fbc24d1d79f9a6363172",
"text": "Big Data Analytics and Deep Learning are two high-focus of data science. Big Data has become important as many organizations both public and private have been collecting massive amounts of domain-specific information, which can contain useful information about problems such as national intelligence, cyber security, fraud detection, marketing, and medical informatics. Companies such as Google and Microsoft are analyzing large volumes of data for business analysis and decisions, impacting existing and future technology. Deep Learning algorithms extract high-level, complex abstractions as data representations through a hierarchical learning process. Complex abstractions are learnt at a given level based on relatively simpler abstractions formulated in the preceding level in the hierarchy. A key benefit of Deep Learning is the analysis and learning of massive amounts of unsupervised data, making it a valuable tool for Big Data Analytics where raw data is largely unlabeled and un-categorized. In the present study, we explore how Deep Learning can be utilized for addressing some important problems in Big Data Analytics, including extracting complex patterns from massive volumes of data, semantic indexing, data tagging, fast information retrieval, and simplifying discriminative tasks. We also investigate some aspects of Deep Learning research that need further exploration to incorporate specific challenges introduced by Big Data Analytics, including streaming data, high-dimensional data, scalability of models, and distributed computing. We conclude by presenting insights into relevant future works by posing some questions, including defining data sampling criteria, domain adaptation modeling, defining criteria for obtaining useful data abstractions, improving semantic indexing, semi-supervised learning, and active learning.",
"title": ""
}
] |
[
{
"docid": "17ac85242f7ee4bc4991e54403e827c4",
"text": "Over the last two decades, an impressive progress has been made in the identification of novel factors in the translocation machineries of the mitochondrial protein import and their possible roles. The role of lipids and possible protein-lipids interactions remains a relatively unexplored territory. Investigating the role of potential lipid-binding regions in the sub-units of the mitochondrial motor might help to shed some more light in our understanding of protein-lipid interactions mechanistically. Bioinformatics results seem to indicate multiple potential lipid-binding regions in each of the sub-units. The subsequent characterization of some of those regions in silico provides insight into the mechanistic functioning of this intriguing and essential part of the protein translocation machinery. Details about the way the regions interact with phospholipids were found by the use of Monte Carlo simulations. For example, Pam18 contains one possible transmembrane region and two tilted surface bound conformations upon interaction with phospholipids. The results demonstrate that the presented bioinformatics approach might be useful in an attempt to expand the knowledge of the possible role of protein-lipid interactions in the mitochondrial protein translocation process.",
"title": ""
},
{
"docid": "999d111ff3c65f48f63ee51bd2230172",
"text": "We present techniques for gathering data that expose errors of automatic predictive models. In certain common settings, traditional methods for evaluating predictive models tend to miss rare but important errors—most importantly, cases for which the model is confident of its prediction (but wrong). In this article, we present a system that, in a game-like setting, asks humans to identify cases that will cause the predictive model-based system to fail. Such techniques are valuable in discovering problematic cases that may not reveal themselves during the normal operation of the system and may include cases that are rare but catastrophic. We describe the design of the system, including design iterations that did not quite work. In particular, the system incentivizes humans to provide examples that are difficult for the model to handle by providing a reward proportional to the magnitude of the predictive model's error. The humans are asked to “Beat the Machine” and find cases where the automatic model (“the Machine”) is wrong. Experiments show that the humans using Beat the Machine identify more errors than do traditional techniques for discovering errors in predictive models, and, indeed, they identify many more errors where the machine is (wrongly) confident it is correct. Furthermore, those cases the humans identify seem to be not simply outliers, but coherent areas missed completely by the model. Beat the Machine identifies the “unknown unknowns.” Beat the Machine has been deployed at an industrial scale by several companies. The main impact has been that firms are changing their perspective on and practice of evaluating predictive models.\n “There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don't know. But there are also unknown unknowns. There are things we don't know we don't know.”\n --Donald Rumsfeld",
"title": ""
},
{
"docid": "35586c00530db3fd928512134b4927ec",
"text": "Basic definitions concerning the multi-layer feed-forward neural networks are given. The back-propagation training algorithm is explained. Partial derivatives of the objective function with respect to the weight and threshold coefficients are derived. These derivatives are valuable for an adaptation process of the considered neural network. Training and generalisation of multi-layer feed-forward neural networks are discussed. Improvements of the standard back-propagation algorithm are reviewed. Example of the use of multi-layer feed-forward neural networks for prediction of carbon-13 NMR chemical shifts of alkanes is given. Further applications of neural networks in chemistry are reviewed. Advantages and disadvantages of multilayer feed-forward neural networks are discussed.",
"title": ""
},
{
"docid": "37552cc90e02204afdd362a7d5978047",
"text": "In this talk we introduce visible light communication and discuss challenges and techniques to improve the performance of white organic light emitting diode (OLED) based systems.",
"title": ""
},
{
"docid": "518cb733bfbb746315498c1409d118c5",
"text": "BACKGROUND\nAndrogenetic alopecia (AGA) is a common form of scalp hair loss that affects up to 50% of males between 18 and 40 years old. Several molecules are commonly used for the treatment of AGA, acting on different steps of its pathogenesis (Minoxidil, Finasteride, Serenoa repens) and show some side effects. In literature, on the basis of hypertrichosis observed in patients treated with analogues of prostaglandin PGF2a, it was supposed that prostaglandins would have an important role in the hair growth: PGE and PGF2a play a positive role, while PGD2 a negative one.\n\n\nOBJECTIVE\nWe carried out a pilot study to evaluate the efficacy of topical cetirizine versus placebo in patients with AGA.\n\n\nPATIENTS AND METHODS\nA sample of 85 patients was recruited, of which 67 were used to assess the effectiveness of the treatment with topical cetirizine, while 18 were control patients.\n\n\nRESULTS\nWe found that the main effect of cetirizine was an increase in total hair density, terminal hair density and diameter variation from T0 to T1, while the vellus hair density shows an evident decrease. The use of a molecule as cetirizine, with no notable side effects, makes possible a good compliance by patients.\n\n\nCONCLUSION\nOur results have shown that topical cetirizine 1% is responsible for a significant improvement of the initial framework of AGA.",
"title": ""
},
{
"docid": "4d2dad29f0f02d448c78b7beda529022",
"text": "This paper proposes a novel diagnosis method for detection and discrimination of two typical mechanical failures in induction motors by stator current analysis: load torque oscillations and dynamic rotor eccentricity. A theoretical analysis shows that each fault modulates the stator current in a different way: torque oscillations lead to stator current phase modulation, whereas rotor eccentricities produce stator current amplitude modulation. The use of traditional current spectrum analysis involves identical frequency signatures with the two fault types. A time-frequency analysis of the stator current with the Wigner distribution leads to different fault signatures that can be used for a more accurate diagnosis. The theoretical considerations and the proposed diagnosis techniques are validated on experimental signals.",
"title": ""
},
{
"docid": "411d64804b8271b426521db5769cdb6f",
"text": "This paper presents APT, a localization system for outdoor pedestrians with smartphones. APT performs better than the built-in GPS module of the smartphone in terms of accuracy. This is achieved by introducing a robust dead reckoning algorithm and an error-tolerant algorithm for map matching. When the user is walking with the smartphone, the dead reckoning algorithm monitors steps and walking direction in real time. It then reports new steps and turns to the map-matching algorithm. Based on updated information, this algorithm adjusts the user's location on a map in an error-tolerant manner. If location ambiguity among several routes occurs after adjustments, the GPS module is queried to help eliminate this ambiguity. Evaluations in practice show that the error of our system is less than 1/2 that of GPS.",
"title": ""
},
{
"docid": "fae9789def98f0f5813ed4a644043c2f",
"text": "---------------------------------------------------------------------***--------------------------------------------------------------------Abstract Nowadays people are more interested to express and share their views, feedbacks, suggestions, and opinions about a particular topic on the web. People and company rely more on online opinions about products and services for their decision making. A major problem in identifying the opinion classification is high dimensionality of the feature space. Most of these features are irrelevant, redundant, and noisy which affects the performance of the classifier. Therefore, feature selection is an essential step in the fake review detection to reduce the dimensionality of the feature space and to improve accuracy. In this paper, binary artificial bee colony (BABC) with KNN is proposed to solve feature selection problem for sentiment classification. The experimental results demonstrate that the proposed method selects more informative features set compared to the competitive methods as it attains higher classification accuracy.",
"title": ""
},
{
"docid": "ef53a10864facc669b9ac5218f394bca",
"text": "Emotion hacking virtual reality (EH-VR) system is an interactive system that hacks one's heartbeat and controls it to accelerate scary VR experience. The EH-VR system provides vibrotactile biofeedback, which resembles a heartbeat, from the floor. The system determines false heartbeat frequency by detecting user's heart rate in real time and calculates false heart rate, which is faster than the one observed according to the quadric equation model. With the system, we demonstrate \"Pressure of unknown\" which is a CG VR space originally created to express the metaphor of scare. A user experiences this space by using a wheel chair as a controller to walk through a VR world displayed via HMD while receiving vibrotac-tile feedback of false heartbeat calculated from its own heart rate from the floor.",
"title": ""
},
{
"docid": "cb6704ade47db83a6338e43897d72956",
"text": "Renewable energy sources are essential paths towards sustainable development and CO2 emission reduction. For example, the European Union has set the target of achieving 22% of electricity generation from renewable sources by 2010. However, the extensive use of this energy source is being avoided by some technical problems as fouling and slagging in the surfaces of boiler heat exchangers. Although these phenomena were extensively studied in the last decades in order to optimize the behaviour of large coal power boilers, a simple, general and effective method for fouling control has not been developed. For biomass boilers, the feedstock variability and the presence of new components in ash chemistry increase the fouling influence in boiler performance. In particular, heat transfer is widely affected and the boiler capacity becomes dramatically reduced. Unfortunately, the classical approach of regular sootblowing cycles becomes clearly insufficient for them. Artificial Intelligence (AI) provides new means to undertake this problem. This paper illustrates a methodology based on Neural Networks (NNs) and Fuzzy-Logic Expert Systems to select the moment for activating sootblowing in an industrial biomass boiler. The main aim is to minimize the boiler energy and efficiency losses with a proper sootblowing activation. Although the NN type used in this work is well-known and the Hybrid Systems had been extensively used in the last decade, the excellent results obtained in the use of AI in industrial biomass boilers control with regard to previous approaches makes this work a novelty. r 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ce30d02bab5d67559f9220c1db11e80d",
"text": "Thalidomide was originally used to treat morning sickness, but was banned in the 1960s for causing serious congenital birth defects. Remarkably, thalidomide was subsequently discovered to have anti-inflammatory and anti-angiogenic properties, and was identified as an effective treatment for multiple myeloma. A series of immunomodulatory drugs — created by chemical modification of thalidomide — have been developed to overcome the original devastating side effects. Their powerful anticancer properties mean that these drugs are now emerging from thalidomide's shadow as useful anticancer agents.",
"title": ""
},
{
"docid": "acf4f5fa5ae091b5e72869213deb643e",
"text": "A key ingredient in the design of visual object classification systems is the identification of relevant class specific aspects while being robust to intra-class variations. While this is a necessity in order to generalize beyond a given set of training images, it is also a very difficult problem due to the high variability of visual appearance within each class. In the last years substantial performance gains on challenging benchmark datasets have been reported in the literature. This progress can be attributed to two developments: the design of highly discriminative and robust image features and the combination of multiple complementary features based on different aspects such as shape, color or texture. In this paper we study several models that aim at learning the correct weighting of different features from training data. These include multiple kernel learning as well as simple baseline methods. Furthermore we derive ensemble methods inspired by Boosting which are easily extendable to several multiclass setting. All methods are thoroughly evaluated on object classification datasets using a multitude of feature descriptors. The key results are that even very simple baseline methods, that are orders of magnitude faster than learning techniques are highly competitive with multiple kernel learning. Furthermore the Boosting type methods are found to produce consistently better results in all experiments. We provide insight of when combination methods can be expected to work and how the benefit of complementary features can be exploited most efficiently.",
"title": ""
},
{
"docid": "e573cffab31721c14725de3e2608eabf",
"text": "Sketching on paper is a quick and easy way to communicate ideas. However, many sketch-based systems require people to draw in contrived ways instead of sketching freely as they would on paper. NaturaSketch affords a more natural interface through multiple strokes that overlap, cross, and connect. It also features a meshing algorithm to support multiple strokes of different classifications, which lets users design complex 3D shapes from sketches drawn over existing images. To provide a familiar workflow for object design, a set of sketch annotations can also specify modeling and editing operations. NaturaSketch empowers designers to produce a variety of models quickly and easily.",
"title": ""
},
{
"docid": "8e302428a1fd6f7331f5546c22bf4d73",
"text": "Automatic extraction of synonyms and/or semantically related words has various applications in Natural Language Processing (NLP). There are currently two mainstream extraction paradigms, namely, lexicon-based and distributional approaches. The former usually suffers from low coverage, while the latter is only able to capture general relatedness rather than strict synonymy. In this paper, two rule-based extraction methods are applied to definitions from a machine-readable dictionary. Extracted synonyms are evaluated in two experiments by solving TOEFL synonym questions and being compared against existing thesauri. The proposed approaches have achieved satisfactory results in both evaluations, comparable to published studies or even the state of the art.",
"title": ""
},
{
"docid": "597f097d5206fc259224b905d4d20e20",
"text": "W e present here a QT database designed j b r evaluation of algorithms that detect waveform boundaries in the EGG. T h e dataabase consists of 105 fifteen-minute excerpts of two-channel ECG Holter recordings, chosen to include a broad variety of QRS and ST-T morphologies. Waveform bounda,ries for a subset of beats in, these recordings have been manually determined by expert annotators using a n interactive graphic disp1a.y to view both signals simultaneously and to insert the annotations. Examples of each m,orvhologg were inchded in this subset of uniaotated beats; at least 30 beats in each record, 3622 beats in all, were manually a:anotated in Ihe database. In 11 records, two indepen,dent sets of ennotations have been inchded, to a.llow inter-observer variability slwdies. T h e Q T Databnse is available on a CD-ROM in the format previously used for the MIT-BJH Arrhythmia Database ayad the Euro-pean ST-T Database, from which some of the recordings in the &T Database have been obtained.",
"title": ""
},
{
"docid": "81814a3ac70e4a2317596185e78e76ef",
"text": "Cardiac complications are common after non-cardiac surgery. Peri-operative myocardial infarction occurs in 3% of patients undergoing major surgery. Recently, however, our understanding of the epidemiology of these cardiac events has broadened to include myocardial injury after non-cardiac surgery, diagnosed by an asymptomatic troponin rise, which also carries a poor prognosis. We review the causation of myocardial injury after non-cardiac surgery, with potential for prevention and treatment, based on currently available international guidelines and landmark studies. Postoperative arrhythmias are also a frequent cause of morbidity, with atrial fibrillation and QT-prolongation having specific relevance to the peri-operative period. Postoperative systolic heart failure is rare outside of myocardial infarction or cardiac surgery, but the impact of pre-operative diastolic dysfunction and its ability to cause postoperative heart failure is increasingly recognised. The latest evidence regarding diastolic dysfunction and the impact on non-cardiac surgery are examined to help guide fluid management for the non-cardiac anaesthetist.",
"title": ""
},
{
"docid": "95395c693b4cdfad722ae0c3545f45ef",
"text": "Aiming at automatic, convenient and non-instrusive motion capture, this paper presents a new generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles(UAVs) each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the using of visual-odometry information provided by the UAV platform, and formulate the surface tracking problem in a non-linear objective function that can be linearized and effectively minimized through a Gaussian-Newton method. Quantitative and qualitative experimental results demonstrate the plausible surface and motion reconstruction results.",
"title": ""
},
{
"docid": "e881c2ab6abc91aa8e7cbe54d861d36d",
"text": "Tracing traffic using commodity hardware in contemporary highspeed access or aggregation networks such as 10-Gigabit Ethernet is an increasingly common yet challenging task. In this paper we investigate if today’s commodity hardware and software is in principle able to capture traffic from a fully loaded Ethernet. We find that this is only possible for data rates up to 1 Gigabit/s without reverting to using special hardware due to, e. g., limitations with the current PC buses. Therefore, we propose a novel way for monitoring higher speed interfaces (e. g., 10-Gigabit) by distributing their traffic across a set of lower speed interfaces (e. g., 1-Gigabit). This opens the next question: which system configuration is capable of monitoring one such 1-Gigabit/s interface? To answer this question we present a methodology for evaluating the performance impact of different system components including different CPU architectures and different operating system. Our results indicate that the combination of AMD Opteron with FreeBSD outperforms all others, independently of running in singleor multi-processor mode. Moreover, the impact of packet filtering, running multiple capturing applications, adding per packet analysis load, saving the captured packets to disk, and using 64-bit OSes is investigated.",
"title": ""
},
{
"docid": "0afbce731c55b9a3d3ced22ad59aa0ef",
"text": "In this paper, we introduce a method that automatically builds text classifiers in a new language by training on already labeled data in another language. Our method transfers the classification knowledge across languages by translating the model features and by using an Expectation Maximization (EM) algorithm that naturally takes into account the ambiguity associated with the translation of a word. We further exploit the readily available unlabeled data in the target language via semisupervised learning, and adapt the translated model to better fit the data distribution of the target language.",
"title": ""
}
] |
scidocsrr
|
13417db90bdc029bcd23aa456926d2ad
|
Secret Intelligence Service Room No. 15 Acting like a Tough Guy: Violent-Sexist Video Games, Identification with Game Characters, Masculine Beliefs, and Empathy for Female Violence Victims
|
[
{
"docid": "0a8763b4ba53cf488692d1c7c6791ab4",
"text": "a r t i c l e i n f o To address the longitudinal relation between adolescents' habitual usage of media violence and aggressive behavior and empathy, N = 1237 seventh and eighth grade high school students in Germany completed measures of violent and nonviolent media usage, aggression, and empathy twice in twelve months. Cross-lagged panel analyses showed significant pathways from T1 media violence usage to higher physical aggression and lower empathy at T2. The reverse paths from T1 aggression or empathy to T2 media violence usage were nonsignificant. The links were similar for boys and girls. No links were found between exposure to nonviolent media and aggression or between violent media and relational aggression. T1 physical aggression moderated the impact of media violence usage, with stronger effects of media violence usage among the low aggression group. Introduction Despite the rapidly growing body of research addressing the potentially harmful effects of exposure to violent media, the evidence currently available is still limited in several ways. First, there is a shortage of longitudinal research examining the associations of media violence usage and aggression over time. Such evidence is crucial for examining hypotheses about the causal directions of observed co-variations of media violence usage and aggression that cannot be established on the basis of cross-sectional research. Second, most of the available longitudinal evidence has focused on aggression as the critical outcome variable, giving comparatively little attention to other potentially harmful effects, such as a decrease in empathy with others in distress. Third, the vast majority of studies available to date were conducted in North America. However, even in the age of globalization, patterns of media violence usage and their cultural contexts may vary considerably, calling for a wider database from different countries to examine the generalizability of results to address each of these aspects. It presents findings from a longitudinal study with a large sample of early adolescents in Germany, relating habitual usage of violence in movies, TV series, and interactive video games to self-reports of physical aggression and empathy over a period of twelve months. The study focused on early adolescence as a developmental period characterized by a confluence of risk factors as a result of biological, psychological, and social changes for a range of adverse outcomes. Regular media violence usage may significantly contribute to the overall risk of aggression as one such negative outcome. Media consumption increases from childhood …",
"title": ""
}
] |
[
{
"docid": "8780b620d228498447c4f1a939fa5486",
"text": "A new mechanism is proposed for securing a blockchain applied to contracts management such as digital rights management. This mechanism includes a new consensus method using a credibility score and creates a hybrid blockchain by alternately using this new method and proof-of-stake. This makes it possible to prevent an attacker from monopolizing resources and to keep securing blockchains.",
"title": ""
},
{
"docid": "92625cb17367de65a912cb59ea767a19",
"text": "With the fast progression of data exchange in electronic way, information security is becoming more important in data storage and transmission. Because of widely using images in industrial process, it is important to protect the confidential image data from unauthorized access. In this paper, we analyzed current image encryption algorithms and compression is added for two of them (Mirror-like image encryption and Visual Cryptography). Implementations of these two algorithms have been realized for experimental purposes. The results of analysis are given in this paper. Keywords—image encryption, image cryptosystem, security, transmission.",
"title": ""
},
{
"docid": "702df543119d648be859233bfa2b5d03",
"text": "We review more than 200 applications of neural networks in image processing and discuss the present and possible future role of neural networks, especially feed-forward neural networks, Kohonen feature maps and Hop1eld neural networks. The various applications are categorised into a novel two-dimensional taxonomy for image processing algorithms. One dimension speci1es the type of task performed by the algorithm: preprocessing, data reduction=feature extraction, segmentation, object recognition, image understanding and optimisation. The other dimension captures the abstraction level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level,ion level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level, object-set-level and scene characterisation. Each of the six types of tasks poses speci1c constraints to a neural-based approach. These speci1c conditions are discussed in detail. A synthesis is made of unresolved problems related to the application of pattern recognition techniques in image processing and speci1cally to the application of neural networks. Finally, we present an outlook into the future application of neural networks and relate them to novel developments. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "dd4750b43931b3b09a5e95eaa74455d1",
"text": "In viticulture, there are several applications where bud detection in vineyard images is a necessary task, susceptible of being automated through the use of computer vision methods. A common and effective family of visual detection algorithms are the scanning-window type, that slide a (usually) fixed size window along the original image, classifying each resulting windowed-patch as containing or not containing the target object. The simplicity of these algorithms finds its most challenging aspect in the classification stage. Interested in grapevine buds detection in natural field conditions, this paper presents a classification method for images of grapevine buds ranging 100 to 1600 pixels in diameter, captured in outdoor, under natural field conditions, in winter (i.e., no grape bunches, very few leaves, and dormant buds), without artificial background, and with minimum equipment requirements. The proposed method uses well-known computer vision technologies: Scale-Invariant Feature Transform for calculating low-level features, Bag of Features for building an image descriptor, and Support Vector Machines for training a classifier. When evaluated over images containing buds of at least 100 pixels in diameter, the approach achieves a recall higher than 0.9 and a precision of 0.86 over all windowed-patches covering the whole bud and down to 60% of it, and scaled up to window patches containing a proportion of 20%-80% of bud versus background pixels. This robustness on the position and size of the window demonstrates its viability for use as the classification stage in a scanning-window detection algorithms.",
"title": ""
},
{
"docid": "26a599c22c173f061b5d9579f90fd888",
"text": "markov logic an interface layer for artificial markov logic an interface layer for artificial shinichi tsukada in size 22 syyjdjbook.buncivy yumina ooba in size 24 ajfy7sbook.ztoroy okimi in size 15 edemembookkey.16mb markov logic an interface layer for artificial intelligent systems (ai-2) ubc computer science interface layer for artificial intelligence daniel lowd essential principles for autonomous robotics markovlogic: aninterfacelayerfor arti?cialintelligence official encyclopaedia of sheffield united football club hot car hot car firext answers || 2007 acura tsx hitch manual course syllabus university of texas at dallas jump frog jump cafebr 1994 chevy silverado 1500 engine ekpbs readings in earth science alongs johnson owners manual pdf firext thomas rescues the diesels cafebr dead sea scrolls and the jewish origins of christianity install gimp help manual by iitsuka asao vox diccionario abreviado english spanis mdmtv nobutaka in size 26 bc13xqbookog.xxuz mechanisms in b cell neoplasia 1992 workshop at the spocks world diane duane nabbit treasury of saints fiores reasoning with probabilistic university of texas at austin gp1300r yamaha waverunner service manua by takisawa tomohide repair manual haier hpr10xc6 air conditioner birdz mexico icons mexico icons oobags asus z53 manual by hatsutori yoshino industrial level measurement by haruyuki morimoto",
"title": ""
},
{
"docid": "f3459ff684d6309ac773c20e03f86183",
"text": "We propose an algorithm to separate simultaneously speaking persons from each other, the “cocktail party problem”, using a single microphone. Our approach involves a deep recurrent neural networks regression to a vector space that is descriptive of independent speakers. Such a vector space can embed empirically determined speaker characteristics and is optimized by distinguishing between speaker masks. We call this technique source-contrastive estimation. The methodology is inspired by negative sampling, which has seen success in natural language processing, where an embedding is learned by correlating and decorrelating a given input vector with output weights. Although the matrix determined by the output weights is dependent on a set of known speakers, we only use the input vectors during inference. Doing so will ensure that source separation is explicitly speaker-independent. Our approach is similar to recent deep neural network clustering and permutation-invariant training research; we use weighted spectral features and masks to augment individual speaker frequencies while filtering out other speakers. We avoid, however, the severe computational burden of other approaches with our technique. Furthermore, by training a vector space rather than combinations of different speakers or differences thereof, we avoid the so-called permutation problem during training. Our algorithm offers an intuitive, computationally efficient response to the cocktail party problem, and most importantly boasts better empirical performance than other current techniques.",
"title": ""
},
{
"docid": "8741e414199ecfbbf4a4c16d8a303ab5",
"text": "In ophthalmic artery occlusion by hyaluronic acid injection, the globe may get worse by direct intravitreal administration of hyaluronidase. Retrograde cannulation of the ophthalmic artery may have the potential for restoration of retinal perfusion and minimizing the risk of phthisis bulbi. The study investigated the feasibility of cannulation of the ophthalmic artery for retrograde injection. In 10 right orbits of 10 cadavers, cannulation and ink injection of the supraorbital artery in the supraorbital approach were performed under surgical loupe magnification. In 10 left orbits, the medial upper lid was curvedly incised to retrieve the retroseptal ophthalmic artery for cannulation by a transorbital approach. Procedural times were recorded. Diameters of related arteries were bilaterally measured for comparison. Dissections to verify dye distribution were performed. Cannulation was successfully performed in 100 % and 90 % of the transorbital and the supraorbital approaches, respectively. The transorbital approach was more practical to perform compared with the supraorbital approach due to a trend toward a short procedure time (18.4 ± 3.8 vs. 21.9 ± 5.0 min, p = 0.74). The postseptal ophthalmic artery exhibited a tortious course, easily retrieved and cannulated, with a larger diameter compared to the supraorbital artery (1.25 ± 0.23 vs. 0.84 ± 0.16 mm, p = 0.000). The transorbital approach is more practical than the supraorbital approach for retrograde cannulation of the ophthalmic artery. This study provides a reliable access route implication for hyaluronidase injection into the ophthalmic artery to salvage central retinal occlusion following hyaluronic acid injection. This journal requires that authors assign a level of evidence to each submission to which Evidence-Based Medicine rankings are applicable. This excludes Review Articles, Book Reviews, and manuscripts that concern Basic Science, Animal Studies, Cadaver Studies, and Experimental Studies. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors http://www.springer.com/00266 .",
"title": ""
},
{
"docid": "d6602316a4b1062c177b719fc4985084",
"text": "Agricultural residues, such as lignocellulosic materials (LM), are the most attractive renewable bioenergy sources and are abundantly found in nature. Anaerobic digestion has been extensively studied for the effective utilization of LM for biogas production. Experimental investigation of physiochemical changes that occur during pretreatment is needed for developing mechanistic and effective models that can be employed for the rational design of pretreatment processes. Various-cutting edge pretreatment technologies (physical, chemical and biological) are being tested on the pilot scale. These different pretreatment methods are widely described in this paper, among them, microaerobic pretreatment (MP) has gained attention as a potential pretreatment method for the degradation of LM, which just requires a limited amount of oxygen (or air) supplied directly during the pretreatment step. MP involves microbial communities under mild conditions (temperature and pressure), uses fewer enzymes and less energy for methane production, and is probably the most promising and environmentally friendly technique in the long run. Moreover, it is technically and economically feasible to use microorganisms instead of expensive chemicals, biological enzymes or mechanical equipment. The information provided in this paper, will endow readers with the background knowledge necessary for finding a promising solution to methane production.",
"title": ""
},
{
"docid": "cc6cf6557a8be12d8d3a4550163ac0a9",
"text": "In this study, different S/D contacting options for lateral NWFET devices are benchmarked at 7nm node dimensions and beyond. Comparison is done at both DC and ring oscillator levels. It is demonstrated that implementing a direct contact to a fin made of Si/SiGe super-lattice results in 13% performance improvement. Also, we conclude that the integration of internal spacers between the NWs is a must for lateral NWFETs in order to reduce device parasitic capacitance.",
"title": ""
},
{
"docid": "c487d81718a194dc008c3f652a2f9b14",
"text": "In robotics, lower-level controllers are typically used to make the robot solve a specific task in a fixed context. For example, the lower-level controller can encode a hitting movement while the context defines the target coordinates to hit. However, in many learning problems the context may change between task executions. To adapt the policy to a new context, we utilize a hierarchical approach by learning an upper-level policy that generalizes the lower-level controllers to new contexts. A common approach to learn such upper-level policies is to use policy search. However, the majority of current contextual policy search approaches are model-free and require a high number of interactions with the robot and its environment. Model-based approaches are known to significantly reduce the amount of robot experiments, however, current model-based techniques cannot be applied straightforwardly to the problem of learning contextual upper-level policies. They rely on specific parametrizations of the policy and the reward function, which are often unrealistic in the contextual policy search formulation. In this paper, we propose a novel model-based contextual policy search algorithm that is able to generalize lower-level controllers, and is data-efficient. Our approach is based on learned probabilistic forward models and information theoretic policy search. Unlike current algorithms, our method does not require any assumption on the parametrization of the policy or the reward function. We show on complex simulated robotic tasks and in a real robot experiment that the proposed learning framework speeds up the learning process by up to two orders of magnitude in comparison to existing methods, while learning high quality policies.",
"title": ""
},
{
"docid": "f80f1952c5b58185b261d53ba9830c47",
"text": "This paper presents a new class of thin, dexterous continuum robots, which we call active cannulas due to their potential medical applications. An active cannula is composed of telescoping, concentric, precurved superelastic tubes that can be axially translated and rotated at the base relative to one another. Active cannulas derive bending not from tendon wires or other external mechanisms but from elastic tube interaction in the backbone itself, permitting high dexterity and small size, and dexterity improves with miniaturization. They are designed to traverse narrow and winding environments without relying on ldquoguidingrdquo environmental reaction forces. These features seem ideal for a variety of applications where a very thin robot with tentacle-like dexterity is needed. In this paper, we apply beam mechanics to obtain a kinematic model of active cannula shape and describe design tools that result from the modeling process. After deriving general equations, we apply them to a simple three-link active cannula. Experimental results illustrate the importance of including torsional effects and the ability of our model to predict energy bifurcation and active cannula shape.",
"title": ""
},
{
"docid": "781ebbf85a510cfd46f0c824aa4aba7e",
"text": "Human activity recognition (HAR) is an important research area in the fields of human perception and computer vision due to its wide range of applications. These applications include: intelligent video surveillance, ambient assisted living, human computer interaction, human-robot interaction, entertainment, and intelligent driving. Recently, with the emergence and successful deployment of deep learning techniques for image classification, researchers have migrated from traditional handcrafting to deep learning techniques for HAR. However, handcrafted representation-based approaches are still widely used due to some bottlenecks such as computational complexity of deep learning techniques for activity recognition. However, approaches based on handcrafted representation are not able to handle complex scenarios due to their limitations and incapability; therefore, resorting to deep learning-based techniques is a natural option. This review paper presents a comprehensive survey of both handcrafted and learning-based action representations, offering comparison, analysis, and discussions on these approaches. In addition to this, the well-known public datasets available for experimentations and important applications of HAR are also presented to provide further insight into the field. This is the first review paper of its kind which presents all these aspects of HAR in a single review article with comprehensive coverage of each part. Finally, the paper is concluded with important discussions and research directions in the domain of HAR.",
"title": ""
},
{
"docid": "9e0cbbe8d95298313fd929a7eb2bfea9",
"text": "We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. We provide a context to discuss the technology by reviewing several medical applications of augmented-reality re search efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues for each approach, optical versus video, from both a technology and human-factor point of view. Finally, we point to potentially promising future developments of such devices including eye tracking and multifocus planes capabilities, as well as hybrid optical/video technology.",
"title": ""
},
{
"docid": "4a1a1b3012f2ce941cc532a55b49f09b",
"text": "Gamification informally refers to making a system more game-like. More specifically, gamification denotes applying game mechanics to a non-game system. We theorize that gamification success depends on the game mechanics employed and their effects on user motivation and immersion. The proposed theory may be tested using an experiment or questionnaire study.",
"title": ""
},
{
"docid": "a91d8e09082836bca10b003ef5f7ceff",
"text": "Mininet is network emulation software that allows launching a virtual network with switches, hosts and an SDN controller all with a single command on a single Linux kernel. It is a great way to start learning about SDN and Open-Flow as well as test SDN controller and SDN applications. Mininet can be used to deploy large networks on a single computer or virtual machine provided with limited resources. It is freely available open source software that emulates Open-Flow device and SDN controllers. Keywords— SDN, Mininet, Open-Flow, Python, Wireshark",
"title": ""
},
{
"docid": "853b5ab3ed6a9a07c8d11ad32d0e58ad",
"text": "We introduce a new statistical model for time series that iteratively segments data into regimes with approximately linear dynamics and learns the parameters of each of these linear regimes. This model combines and generalizes two of the most widely used stochastic time-series modelshidden Markov models and linear dynamical systemsand is closely related to models that are widely used in the control and econometrics literatures. It can also be derived by extending the mixture of experts neural network (Jacobs, Jordan, Nowlan, & Hinton, 1991) to its fully dynamical version, in which both expert and gating networks are recurrent. Inferring the posterior probabilities of the hidden states of this model is computationally intractable, and therefore the exact expectation maximization (EM) algorithm cannot be applied. However, we present a variational approximation that maximizes a lower bound on the log-likelihood and makes use of both the forward and backward recursions for hidden Markov models and the Kalman filter recursions for linear dynamical systems. We tested the algorithm on artificial data sets and a natural data set of respiration force from a patient with sleep apnea. The results suggest that variational approximations are a viable method for inference and learning in switching state-space models.",
"title": ""
},
{
"docid": "24bb26da0ce658ff075fc89b73cad5af",
"text": "Recent trends in robot learning are to use trajectory-based optimal control techniques and reinforcement learning to scale complex robotic systems. On the one hand, increased computational power and multiprocessing, and on the other hand, probabilistic reinforcement learning methods and function approximation, have contributed to a steadily increasing interest in robot learning. Imitation learning has helped significantly to start learning with reasonable initial behavior. However, many applications are still restricted to rather lowdimensional domains and toy applications. Future work will have to demonstrate the continual and autonomous learning abilities, which were alluded to in the introduction.",
"title": ""
},
{
"docid": "cc3c8ac3c1f0c6ffae41e70a88dc929d",
"text": "Many blockchain-based cryptocurrencies such as Bitcoin and Ethereum use Nakamoto consensus protocol to reach agreement on the blockchain state between a network of participant nodes. The Nakamoto consensus protocol probabilistically selects a leader via a mining process which rewards network participants (or miners) to solve computational puzzles. Finding solutions for such puzzles requires an enormous amount of computation. Thus, miners often aggregate resources into pools and share rewards amongst all pool members via pooled mining protocol. Pooled mining helps reduce the variance of miners’ payoffs significantly and is widely adopted in popular cryptocurrencies. For example, as of this writing, more than 95% of mining power in Bitcoin emanates from 10 mining pools. Although pooled mining benefits miners, it severely degrades decentralization, since a centralized pool manager administers the pooling protocol. Furthermore, pooled mining increases the transaction censorship significantly since pool managers decide which transactions are included in blocks. Due to this widely recognized threat, the Bitcoin community has proposed an alternative called P2Pool which decentralizes the operations of the pool manager. However, P2Pool is inefficient, increases the variance of miners’ rewards, requires much more computation and bandwidth from miners, and has not gained wide adoption. In this work, we propose a new protocol design for a decentralized mining pool. Our protocol called SMARTPOOL shows how one can leverage smart contracts, which are autonomous agents themselves running on decentralized blockchains, to decentralize cryptocurrency mining. SMARTPOOL guarantees high security, low reward’s variance for miners and is cost-efficient. We implemented a prototype of SMARTPOOL as an Ethereum smart contract working as a decentralized mining pool for Bitcoin. We have deployed it on the Ethereum testnet and our experiments confirm that SMARTPOOL is efficient and ready for practical use.",
"title": ""
},
{
"docid": "fc3c4f6c413719bbcf7d13add8c3d214",
"text": "Disentangling the effects of selection and influence is one of social science's greatest unsolved puzzles: Do people befriend others who are similar to them, or do they become more similar to their friends over time? Recent advances in stochastic actor-based modeling, combined with self-reported data on a popular online social network site, allow us to address this question with a greater degree of precision than has heretofore been possible. Using data on the Facebook activity of a cohort of college students over 4 years, we find that students who share certain tastes in music and in movies, but not in books, are significantly likely to befriend one another. Meanwhile, we find little evidence for the diffusion of tastes among Facebook friends-except for tastes in classical/jazz music. These findings shed light on the mechanisms responsible for observed network homogeneity; provide a statistically rigorous assessment of the coevolution of cultural tastes and social relationships; and suggest important qualifications to our understanding of both homophily and contagion as generic social processes.",
"title": ""
},
{
"docid": "d41703226184b92ef3f7feb501aa4c9b",
"text": "The first RADAR patent was applied for by Christian Huelsmeyer on April 30, 1904 at the patent office in Berlin, Germany. He was motivated by a ship accident on the river Weser and called his experimental system ”Telemobiloscope”. In this chapter some important and modern topics in radar system design and radar signal processing will be discussed. Waveform design is one innovative topic where new results are available for special applications like automotive radar. Detection theory is a fundamental radar topic which will be discussed in this chapter for new range CFAR schemes which are essential for all radar systems. Target recognition has for many years been the dream of all radar engineers. New results for target classification will be discussed for some automotive radar sensors.",
"title": ""
}
] |
scidocsrr
|
e61322adaf96eaa05e3ccd3121049e27
|
Fitness Gamification: Concepts, Characteristics, and Applications
|
[
{
"docid": "0c7afb3bee6dd12e4a69632fbdb50ce8",
"text": "OBJECTIVES\nTo systematically review levels of metabolic expenditure and changes in activity patterns associated with active video game (AVG) play in children and to provide directions for future research efforts.\n\n\nDATA SOURCES\nA review of the English-language literature (January 1, 1998, to January 1, 2010) via ISI Web of Knowledge, PubMed, and Scholars Portal using the following keywords: video game, exergame, physical activity, fitness, exercise, energy metabolism, energy expenditure, heart rate, disability, injury, musculoskeletal, enjoyment, adherence, and motivation.\n\n\nSTUDY SELECTION\nOnly studies involving youth (< or = 21 years) and reporting measures of energy expenditure, activity patterns, physiological risks and benefits, and enjoyment and motivation associated with mainstream AVGs were included. Eighteen studies met the inclusion criteria. Articles were reviewed and data were extracted and synthesized by 2 independent reviewers. MAIN OUTCOME EXPOSURES: Energy expenditure during AVG play compared with rest (12 studies) and activity associated with AVG exposure (6 studies).\n\n\nMAIN OUTCOME MEASURES\nPercentage increase in energy expenditure and heart rate (from rest).\n\n\nRESULTS\nActivity levels during AVG play were highly variable, with mean (SD) percentage increases of 222% (100%) in energy expenditure and 64% (20%) in heart rate. Energy expenditure was significantly lower for games played primarily through upper body movements compared with those that engaged the lower body (difference, -148%; 95% confidence interval, -231% to -66%; P = .001).\n\n\nCONCLUSIONS\nThe AVGs enable light to moderate physical activity. Limited evidence is available to draw conclusions on the long-term efficacy of AVGs for physical activity promotion.",
"title": ""
},
{
"docid": "5e7a06213a32e0265dcb8bc11a5bb3f1",
"text": "The global obesity epidemic has prompted our community to explore the potential for technology to play a stronger role in promoting healthier lifestyles. Although there are several examples of successful games based on focused physical interaction, persuasive applications that integrate into everyday life have had more mixed results. This underscores a need for designs that encourage physical activity while addressing fun, sustainability, and behavioral change. This note suggests a new perspective, inspired in part by the social nature of many everyday fitness applications and by the successful encouragement of long term play in massively multiplayer online games. We first examine the game design literature to distill a set of principles for discussing and comparing applications. We then use these principles to analyze an existing application. Finally, we present Kukini, a design for an everyday fitness game.",
"title": ""
}
] |
[
{
"docid": "eb8321467458401aa86398390c32ae00",
"text": "As the wide popularization of online social networks, online users are not content only with keeping online friendship with social friends in real life any more. They hope the system designers can help them exploring new friends with common interest. However, the large amount of online users and their diverse and dynamic interests possess great challenges to support such a novel feature in online social networks. In this paper, by leveraging interest-based features, we design a general friend recommendation framework, which can characterize user interest in two dimensions: context (location, time) and content, as well as combining domain knowledge to improve recommending quality. We also design a potential friend recommender system in a real online social network of biology field to show the effectiveness of our proposed framework.",
"title": ""
},
{
"docid": "ac222a5f8784d7a5563939077c61deaa",
"text": "Cyber-Physical Systems (CPS) are integrations of computation with physical processes. Embedded computers and networks monitor and control the physical processes, usually with feedback loops where physical processes affect computations and vice versa. In the physical world, the passage of time is inexorable and concurrency is intrinsic. Neither of these properties is present in today’s computing and networking abstractions. I argue that the mismatch between these abstractions and properties of physical processes impede technical progress, and I identify promising technologies for research and investment. There are technical approaches that partially bridge the abstraction gap today (such as real-time operating systems, middleware technologies, specialized embedded processor architectures, and specialized networks), and there is certainly considerable room for improvement of these technologies. However, it may be that we need a less incremental approach, where new abstractions are built from the ground up. The foundations of computing are built on the premise that the principal task of computers is transformation of data. Yet we know that the technology is capable of far richer interactions the physical world. I critically examine the foundations that have been built over the last several decades, and determine where the technology and theory bottlenecks and opportunities lie. I argue for a new systems science that is jointly physical and computational.",
"title": ""
},
{
"docid": "4d9f0cf629cd3695a2ec249b81336d28",
"text": "We introduce an over-sketching interface for feature-preserving surface mesh editing. The user sketches a stroke that is the suggested position of part of a silhouette of the displayed surface. The system then segments all image-space silhouettes of the projected surface, identifies among all silhouette segments the best matching part, derives vertices in the surface mesh corresponding to the silhouette part, selects a sub-region of the mesh to be modified, and feeds appropriately modified vertex positions together with the sub-mesh into a mesh deformation tool. The overall algorithm has been designed to enable interactive modification of the surface --- yielding a surface editing system that comes close to the experience of sketching 3D models on paper.",
"title": ""
},
{
"docid": "4ee5931bf57096913f7e13e5da0fbe7e",
"text": "The design of an ultra wideband aperture-coupled vertical microstrip-microstrip transition is presented. The proposed transition exploits broadside coupling between exponentially tapered microstrip patches at the top and bottom layers via an exponentially tapered slot at the mid layer. The theoretical analysis indicates that the best performance concerning the insertion loss and the return loss over the maximum possible bandwidth can be achieved when the coupling factor is equal to 0.75 (or 2.5 dB). The calculated and simulated results show that the proposed transition has a linear phase performance, an important factor for distortionless pulse operation, with less than 0.4 dB insertion loss and more than 17 dB return loss across the frequency band 3.1 GHz to 10.6 GHz.",
"title": ""
},
{
"docid": "8a224bf0376321caa30a95318ec9ecf9",
"text": "With the rapid development of very large scale integration (VLSI) and continuous scaling in the metal oxide semiconductor field effect transistor (MOSFET), pad corrosion in the aluminum (Al) pad surface has become practical concern in the semiconductor industry. This paper presents a new method to improve the pad corrosion on Al pad surface by using new Al/Ti/TiN film stack. The effects of different Al film stacks on the Al pad corrosion have been investigated. The experiment results show that the Al/Ti/TiN film stack could improve bond pad corrosion effectively comparing to Al/SiON film stack. Wafers processed with new Al film stack were stored up to 28 days and display no pad crystal (PDCY) defects on bond pad surfaces.",
"title": ""
},
{
"docid": "f073abd94a9c5853e561439de35ac9bd",
"text": "Evolutionary learning is one of the most popular techniques for designing quantitative investment (QI) products. Trend following (TF) strategies, owing to their briefness and efficiency, are widely accepted by investors. Surprisingly, to the best of our knowledge, no related research has investigated TF investment strategies within an evolutionary learning model. This paper proposes a hybrid long-term and short-term evolutionary trend following algorithm (eTrend) that combines TF investment strategies with the eXtended Classifier Systems (XCS). The proposed eTrend algorithm has two advantages: (1) the combination of stock investment strategies (i.e., TF) and evolutionary learning (i.e., XCS) can significantly improve computation effectiveness and model practicability, and (2) XCS can automatically adapt to market directions and uncover reasonable and understandable trading rules for further analysis, which can help avoid the irrational trading behaviors of common investors. To evaluate eTrend, experiments are carried out using the daily trading data stream of three famous indexes in the Shanghai Stock Exchange. Experimental results indicate that eTrend outperforms the buy-and-hold strategy with high Sortino ratio after the transaction cost. Its performance is also superior to the decision tree and artificial neural network trading models. Furthermore, as the concept drift phenomenon is common in the stock market, an exploratory concept drift analysis is conducted on the trading rules discovered in bear and bull market phases. The analysis revealed interesting and rational results. In conclusion, this paper presents convincing evidence that the proposed hybrid trend following model can indeed generate effective trading guid-",
"title": ""
},
{
"docid": "0e068a4e7388ed456de4239326eb9b08",
"text": "The Web so far has been incredibly successful at delivering information to human users. So successful actually, that there is now an urgent need to go beyond a browsing human. Unfortunately, the Web is not yet a well organized repository of nicely structured documents but rather a conglomerate of volatile HTML pages. To address this problem, we present the World Wide Web Wrapper Factory (W4F), a toolkit for the generation of wrappers for Web sources, that offers: (1) an expressive language to specify the extraction of complex structures from HTML pages; (2) a declarative mapping to various data formats like XML; (3) some visual tools to make the engineering of wrappers faster and easier.",
"title": ""
},
{
"docid": "52d3d3bf1f29e254cbb89c64f3b0d6b5",
"text": "Large projects are increasingly adopting agile development practices, and this raises new challenges for research. The workshop on principles of large-scale agile development focused on central topics in large-scale: the role of architecture, inter-team coordination, portfolio management and scaling agile practices. We propose eight principles for large-scale agile development, and present a revised research agenda.",
"title": ""
},
{
"docid": "748eae887bcda0695cbcf1ba1141dd79",
"text": "A wideband bandpass filter (BPF) with reconfigurable bandwidth (BW) is proposed based on a parallel-coupled line structure and a cross-shaped resonator with open stubs. The p-i-n diodes are used as the tuning elements, which can implement three reconfigurable BW states. The prototype of the designed filter reports an absolute BW tuning range of 1.22 GHz, while the fractional BW is varied from 34.8% to 56.5% when centered at 5.7 GHz. The simulation and measured results are in good agreement. Comparing with previous works, the proposed reconfigurable BPF features wider BW tuning range with maximum number of tuning states.",
"title": ""
},
{
"docid": "393711bcd1a8666210e125fb4295e158",
"text": "The purpose of a Beyond 4G (B4G) radio access technology, is to cope with the expected exponential increase of mobile data traffic in local area (LA). The requirements related to physical layer control signaling latencies and to hybrid ARQ (HARQ) round trip time (RTT) are in the order of ~1ms. In this paper, we propose a flexible orthogonal frequency division multiplexing (OFDM) based time division duplex (TDD) physical subframe structure optimized for B4G LA environment. We show that the proposed optimizations allow very frequent link direction switching, thus reaching the tight B4G HARQ RTT requirement and significant control signaling latency reductions compared to existing LTE-Advanced and WiMAX technologies.",
"title": ""
},
{
"docid": "310f13dac8d7cf2d1b40878ef6ce051b",
"text": "Traffic Accidents are occurring due to development of automobile industry and the accidents are unavoidable even the traffic rules are very strictly maintained. Data mining algorithm is applied to model the traffic accident injury level by using traffic accident dataset. It helped by obtaining the characteristics of drivers behavior, road condition and weather condition, Accident severity that are connected with different injury severities and death. This paper presents some models to predict the severity of injury using some data mining algorithms. The study focused on collecting the real data from previous research and obtains the injury severity level of traffic accident data.",
"title": ""
},
{
"docid": "ea05a43abee762d4b484b5027e02a03a",
"text": "One essential task in information extraction from the medical corpus is drug name recognition. Compared with text sources come from other domains, the medical text mining poses more challenges, for example, more unstructured text, the fast growing of new terms addition, a wide range of name variation for the same drug, the lack of labeled dataset sources and external knowledge, and the multiple token representations for a single drug name. Although many approaches have been proposed to overwhelm the task, some problems remained with poor F-score performance (less than 0.75). This paper presents a new treatment in data representation techniques to overcome some of those challenges. We propose three data representation techniques based on the characteristics of word distribution and word similarities as a result of word embedding training. The first technique is evaluated with the standard NN model, that is, MLP. The second technique involves two deep network classifiers, that is, DBN and SAE. The third technique represents the sentence as a sequence that is evaluated with a recurrent NN model, that is, LSTM. In extracting the drug name entities, the third technique gives the best F-score performance compared to the state of the art, with its average F-score being 0.8645.",
"title": ""
},
{
"docid": "55fc836c8b0f10486aa6d969d0cae14d",
"text": "In this manuscript we explore the ways in which the marketplace metaphor resonates with online dating participants and how this conceptual framework influences how they assess themselves, assess others, and make decisions about whom to pursue. Taking a metaphor approach enables us to highlight the ways in which participants’ language shapes their self-concept and interactions with potential partners. Qualitative analysis of in-depth interviews with 34 participants from a large online dating site revealed that the marketplace metaphor was salient for participants, who employed several strategies that reflected the assumptions underlying the marketplace perspective (including resisting the metaphor). We explore the implications of this metaphor for romantic relationship development, such as the objectification of potential partners. Journal of Social and Personal Relationships © The Author(s), 2010. Reprints and permissions: sagepub.co.uk/journalsPermissions.nav, Vol. 27(4): 427–447. DOI: 10.1177/0265407510361614 This research was funded by Affirmative Action Grant 111579 from the Office of Research and Sponsored Programs at California State University, Stanislaus. An earlier version of this paper was presented at the International Communication Association, 2005. We would like to thank Jack Bratich, Art Ramirez, Lamar Reinsch, Jeanine Turner, and three anonymous reviewers for their helpful comments. All correspondence concerning this article should be addressed to Rebecca D. Heino, Georgetown University, McDonough School of Business, Washington D.C. 20057, USA [e-mail: [email protected]]. Larry Erbert was the Action Editor on this article. at MICHIGAN STATE UNIV LIBRARIES on June 9, 2010 http://spr.sagepub.com Downloaded from",
"title": ""
},
{
"docid": "2804384964bc8996e6574bdf67ed9cb5",
"text": "In the past 2 decades, correlational and experimental studies have found a positive association between violent video game play and aggression. There is less evidence, however, to support a long-term relation between these behaviors. This study examined sustained violent video game play and adolescent aggressive behavior across the high school years and directly assessed the socialization (violent video game play predicts aggression over time) versus selection hypotheses (aggression predicts violent video game play over time). Adolescents (N = 1,492, 50.8% female) were surveyed annually from Grade 9 to Grade 12 about their video game play and aggressive behaviors. Nonviolent video game play, frequency of overall video game play, and a comprehensive set of potential 3rd variables were included as covariates in each analysis. Sustained violent video game play was significantly related to steeper increases in adolescents' trajectory of aggressive behavior over time. Moreover, greater violent video game play predicted higher levels of aggression over time, after controlling for previous levels of aggression, supporting the socialization hypothesis. In contrast, no support was found for the selection hypothesis. Nonviolent video game play also did not predict higher levels of aggressive behavior over time. Our findings, and the fact that many adolescents play video games for several hours every day, underscore the need for a greater understanding of the long-term relation between violent video games and aggression, as well as the specific game characteristics (e.g., violent content, competition, pace of action) that may be responsible for this association.",
"title": ""
},
{
"docid": "5c38ad54e43b71ea5588418620bcf086",
"text": "Chondrosarcomas are indolent but invasive chondroid malignancies that can form in the skull base. Standard management of chondrosarcoma involves surgical resection and adjuvant radiation therapy. This review evaluates evidence from the literature to assess the importance of the surgical approach and extent of resection on outcomes for patients with skull base chondrosarcoma. Also evaluated is the ability of the multiple modalities of radiation therapy, such as conventional fractionated radiotherapy, proton beam, and stereotactic radiosurgery, to control tumor growth. Finally, emerging therapies for the treatment of skull-base chondrosarcoma are discussed.",
"title": ""
},
{
"docid": "a86bc0970dba249e1e53f9edbad3de43",
"text": "Periodic inspection of a hanger rope is needed for the effective maintenance of suspension bridge. However, it is dangerous for human workers to access the hanger rope and not easy to check the exact state of the hanger rope. In this work we have developed a wheel-based robot that can approach the hanger rope instead of the human worker and carry the inspection device which is able to examine the inside status of the hanger rope. Meanwhile, a wheel-based cable climbing robot may be badly affected by the vibration that is generated while the robot moves on the bumpy surface of the hanger rope. The caterpillar is able to safely drive with the wide contact face on the rough terrain. Accordingly, we developed the caterpillar that can be combined with the developed cable climbing robot. In this paper, the caterpillar is introduced and its performance is compared with the wheel-based cable climbing robot.",
"title": ""
},
{
"docid": "b5a9bbf52279ce7826434b7e5d3ccbb6",
"text": "We present our 11-layers deep, double-pathway, 3D Convolutional Neural Network, developed for the segmentation of brain lesions. The developed system segments pathology voxel-wise after processing a corresponding multi-modal 3D patch at multiple scales. We demonstrate that it is possible to train such a deep and wide 3D CNN on a small dataset of 28 cases. Our network yields promising results on the task of segmenting ischemic stroke lesions, accomplishing a mean Dice of 64% (66% after postprocessing) on the ISLES 2015 training dataset, ranking among the top entries. Regardless its size, our network is capable of processing a 3D brain volume in 3 minutes, making it applicable to the automated analysis of larger study cohorts.",
"title": ""
},
{
"docid": "653b44b98c78bed426c0e5630145c2ba",
"text": "In the field of non-monotonic logics, the notion of rational closure is acknowledged as a landmark, and we are going to see that such a construction can be characterised by means of a simple method in the context of propositional logic. We then propose an application of our approach to rational closure in the field of Description Logics, an important knowledge representation formalism, and provide a simple decision procedure for this case.",
"title": ""
},
{
"docid": "daa7773486701deab7b0c69e1205a1d9",
"text": "Age progression is defined as aesthetically re-rendering the aging face at any future age for an individual face. In this work, we aim to automatically render aging faces in a personalized way. Basically, for each age group, we learn an aging dictionary to reveal its aging characteristics (e.g., wrinkles), where the dictionary bases corresponding to the same index yet from two neighboring aging dictionaries form a particular aging pattern cross these two age groups, and a linear combination of all these patterns expresses a particular personalized aging process. Moreover, two factors are taken into consideration in the dictionary learning process. First, beyond the aging dictionaries, each person may have extra personalized facial characteristics, e.g., mole, which are invariant in the aging process. Second, it is challenging or even impossible to collect faces of all age groups for a particular person, yet much easier and more practical to get face pairs from neighboring age groups. To this end, we propose a novel Bi-level Dictionary Learning based Personalized Age Progression (BDL-PAP) method. Here, bi-level dictionary learning is formulated to learn the aging dictionaries based on face pairs from neighboring age groups. Extensive experiments well demonstrate the advantages of the proposed BDL-PAP over other state-of-the-arts in term of personalized age progression, as well as the performance gain for cross-age face verification by synthesizing aging faces.",
"title": ""
},
{
"docid": "c9766e95df62d747f5640b3cab412a3f",
"text": "For the last 10 years, interest has grown in low frequency shear waves that propagate in the human body. However, the generation of shear waves by acoustic vibrators is a relatively complex problem, and the directivity patterns of shear waves produced by the usual vibrators are more complicated than those obtained for longitudinal ultrasonic transducers. To extract shear modulus parameters from the shear wave propagation in soft tissues, it is important to understand and to optimize the directivity pattern of shear wave vibrators. This paper is devoted to a careful study of the theoretical and the experimental directivity pattern produced by a point source in soft tissues. Both theoretical and experimental measurements show that the directivity pattern of a point source vibrator presents two very strong lobes for an angle around 35/spl deg/. This paper also points out the impact of the near field in the problem of shear wave generation.",
"title": ""
}
] |
scidocsrr
|
e34fcf16ae45b3687a3d7a89d36306e4
|
WHICH TYPE OF MOTIVATION IS CAPABLE OF DRIVING ACHIEVEMENT BEHAVIORS SUCH AS EXERCISE IN DIFFERENT PERSONALITIES? BY RAJA AMJOD
|
[
{
"docid": "aa223de93696eec79feb627f899f8e8d",
"text": "The standard life events methodology for the prediction of psychological symptoms was compared with one focusing on relatively minor events, namely, the hassles and uplifts of everyday life. Hassles and Uplifts Scales were constructed and administered once a month for 10 consecutive months to a community sample of middle-aged adults. It was found that the Hassles Scale was a better predictor of concurrent and subsequent psychological symptoms than were the life events scores, and that the scale shared most of the variance in symptoms accounted for by life events. When the effects of life events scores were removed, hassles and symptoms remained significantly correlated. Uplifts were positively related to symptoms for women but not for men. Hassles and uplifts were also shown to be related, although only modestly so, to positive and negative affect, thus providing discriminate validation for hassles and uplifts in comparison to measures of emotion. It was concluded that the assessment of daily hassles and uplifts may be a better approach to the prediction of adaptational outcomes than the usual life events approach.",
"title": ""
}
] |
[
{
"docid": "88492d59d0610e69a4c6b42e40689f35",
"text": "In this paper, we describe our participation at the subtask of extraction of relationships between two identified keyphrases. This task can be very helpful in improving search engines for scientific articles. Our approach is based on the use of a convolutional neural network (CNN) trained on the training dataset. This deep learning model has already achieved successful results for the extraction relationships between named entities. Thus, our hypothesis is that this model can be also applied to extract relations between keyphrases. The official results of the task show that our architecture obtained an F1-score of 0.38% for Keyphrases Relation Classification. This performance is lower than the expected due to the generic preprocessing phase and the basic configuration of the CNN model, more complex architectures are proposed as future work to increase the classification rate.",
"title": ""
},
{
"docid": "73577e88b085e9e187328ce36116b761",
"text": "We present an extension to texture mapping that supports the representation of 3-D surface details and view motion parallax. The results are correct for viewpoints that are static or moving, far away or nearby. Our approach is very simple: a relief texture (texture extended with an orthogonal displacement per texel) is mapped onto a polygon using a two-step process: First, it is converted into an ordinary texture using a surprisingly simple 1-D forward transform. The resulting texture is then mapped onto the polygon using standard texture mapping. The 1-D warping functions work in texture coordinates to handle the parallax and visibility changes that result from the 3-D shape of the displacement surface. The subsequent texture-mapping operation handles the transformation from texture to screen coordinates.",
"title": ""
},
{
"docid": "6ddbdf3b7f8b2bc13d2c1babcabbadc6",
"text": "Improved sensors in the automotive field are leading to multi-object tracking of extended objects becoming more and more important for advanced driver assistance systems and highly automated driving. This paper proposes an approach that combines a PHD filter for extended objects, viz. objects that originate multiple measurements while also estimating the shape of the objects via constructing an object-local occupancy grid map and then extracting a polygonal chain. This allows tracking even in traffic scenarios where unambiguous segmentation of measurements is difficult or impossible. In this work, this is achieved using multiple segmentation assumptions by applying different parameter sets for the DBSCAN clustering algorithm. The proposed algorithm is evaluated using simulated data and real sensor data from a test track including highly accurate D-GPS and IMU data as a ground truth.",
"title": ""
},
{
"docid": "08dfd4bb173f7d70cff710590b988f08",
"text": "Gallium-67 citrate is currently considered as the tracer of first choice in the diagnostic workup of fever of unknown origin (FUO). Fluorine-18 2'-deoxy-2-fluoro-D-glucose (FDG) has been shown to accumulate in malignant tumours but also in inflammatory processes. The aim of this study was to prospectively evaluate FDG imaging with a double-head coincidence camera (DHCC) in patients with FUO in comparison with planar and single-photon emission tomography (SPET) 67Ga citrate scanning. Twenty FUO patients underwent FDG imaging with a DHCC which included transaxial and longitudinal whole-body tomography. In 18 of these subjects, 67Ga citrate whole-body and SPET imaging was performed. The 67Ga citrate and FDG images were interpreted by two investigators, both blinded to the results of other diagnostic modalities. Forty percent (8/20) of the patients had infection, 25% (5/20) had auto-immune diseases, 10% (2/20) had neoplasms and 15% (3/20) had other diseases. Fever remained unexplained in 10% (2/20) of the patients. Of the 20 patients studied, FDG imaging was positive and essentially contributed to the final diagnosis in 11 (55%). The sensitivity of transaxial FDG tomography in detecting the focus of fever was 84% and the specificity, 86%. Positive and negative predictive values were 92% and 75%, respectively. If the analysis was restricted to the 18 patients who were investigated both with 67Ga citrate and FDG, sensitivity was 81% and specificity, 86%. Positive and negative predictive values were 90% and 75%, respectively. The diagnostic accuracy of whole-body FDG tomography (again restricted to the aforementioned 18 patients) was lower (sensitivity, 36%; specificity, 86%; positive and negative predictive values, 80% and 46%, respectively). 67Ga citrate SPET yielded a sensitivity of 67% in detecting the focus of fever and a specificity of 78%. Positive and negative predictive values were 75% and 70%, respectively. A low sensitivity (45%), but combined with a high specificity (100%), was found in planar 67Ga imaging. Positive and negative predictive values were 100% and 54%, respectively. It is concluded that in the context of FUO, transaxial FDG tomography performed with a DHCC is superior to 67Ga citrate SPET. This seems to be the consequence of superior tracer kinetics of FDG compared with those of 67Ga citrate and of a better spatial resolution of a DHCC system compared with SPET imaging. In patients with FUO, FDG imaging with either dedicated PET or DHCC should be considered the procedure of choice.",
"title": ""
},
{
"docid": "8b0a90d4f31caffb997aced79c59e50c",
"text": "Visual SLAM systems aim to estimate the motion of a moving camera together with the geometric structure and appearance of the world being observed. To the extent that this is possible using only an image stream, the core problem that must be solved by any practical visual SLAM system is that of obtaining correspondence throughout the images captured. Modern visual SLAM pipelines commonly obtain correspondence by using sparse feature matching techniques and construct maps using a composition of point, line or other simple geometric primitives. The resulting sparse feature map representations provide sparsely furnished, incomplete reconstructions of the observed scene. Related techniques from multiple view stereo (MVS) achieve high quality dense reconstruction by obtaining dense correspondences over calibrated image sequences. Despite the usefulness of the resulting dense models, these techniques have been of limited use in visual SLAM systems. The computational complexity of estimating dense surface geometry has been a practical barrier to its use in real-time SLAM. Furthermore, MVS algorithms have typically required a fixed length, calibrated image sequence to be available throughout the optimisation — a condition fundamentally at odds with the online nature of SLAM. With the availability of massively-parallel commodity computing hardware, we demonstrate new algorithms that achieve high quality incremental dense reconstruction within online visual SLAM. The result is a live dense reconstruction (LDR) of scenes that makes possible numerous applications that can utilise online surface modelling, for instance: planning robot interactions with unknown objects, augmented reality with characters that interact with the scene, or providing enhanced data for object recognition. The core of this thesis goes beyond LDR to demonstrate fully dense visual SLAM. We replace the sparse feature map representation with an incrementally updated, non-parametric, dense surface model. By enabling real-time dense depth map estimation through novel short baseline MVS, we can continuously update the scene model and further leverage its predictive capabilities to achieve robust camera pose estimation with direct whole image alignment. We demonstrate the capabilities of dense visual SLAM using a single moving passive camera, and also when real-time surface measurements are provided by a commodity depth camera. The results demonstrate state-of-the-art, pick-up-and-play 3D reconstruction and camera tracking systems useful in many real world scenarios. Acknowledgements There are key individuals who have provided me with all the support and tools that a student who sets out on an adventure could want. Here, I wish to acknowledge those friends and colleagues, that by providing technical advice or much needed fortitude, helped bring this work to life. Prof. Andrew Davison’s robot vision lab provides a unique research experience amongst computer vision labs in the world. First and foremost, I thank my supervisor Andy for giving me the chance to be part of that experience. His brilliant guidance and support of my growth as a researcher are well matched by his enthusiasm for my work. This is made most clear by his fostering the joy of giving live demonstrations of work in progress. His complete faith in my ability drove me on and gave me license to develop new ideas and build bridges to research areas that we knew little about. 
Under his guidance I’ve been given every possible opportunity to develop my research interests, and this thesis would not be possible without him. My appreciation for Prof. Murray Shanahan’s insights and spirit began with our first conversation. Like ripples from a stone cast into a pond, the presence of his ideas and depth of knowledge instantly propagated through my mind. His enthusiasm and capacity to discuss any topic, old or new to him, and his ability to bring ideas together across the worlds of science and philosophy, showed me an openness to thought that I continue to try to emulate. I am grateful to Murray for securing a generous scholarship for me in the Department of Computing and for providing a home away from home in his cognitive robotics lab. I am indebted to Prof. Owen Holland who introduced me to the world of research at the University of Essex. Owen showed me a first glimpse of the breadth of ideas in robotics, AI, cognition and beyond. I thank Owen for introducing me to the idea of continuing in academia for a doctoral degree and for introducing me to Murray. I have learned much with many friends and colleagues at Imperial College, but there are three who have been instrumental. I thank Steven Lovegrove, Ankur Handa and Renato Salas-Moreno who travelled with me on countless trips into the unknown, sometimes to chase a small concept but more often than not in pursuit of the bigger picture we all wanted to see. They indulged me with months of exploration, collaboration and fun, leading to us understand ideas and techniques that were once out of reach. Together, we were able to learn much more. Thank you Hauke Strasdatt, Luis Pizarro, Jan Jachnick, Andreas Fidjeland and members of the robot vision and cognitive robotics labs for brilliant discussions and for sharing the",
"title": ""
},
{
"docid": "dde695574d7007f6f6c5fc06b2d4468a",
"text": "A model of positive psychological functioning that emerges from diverse domains of theory and philosophy is presented. Six key dimensions of wellness are defined, and empirical research summarizing their empirical translation and sociodemographic correlates is presented. Variations in well-being are explored via studies of discrete life events and enduring human experiences. Life histories of the psychologically vulnerable and resilient, defined via the cross-classification of depression and well-being, are summarized. Implications of the focus on positive functioning for research on psychotherapy, quality of life, and mind/body linkages are reviewed.",
"title": ""
},
{
"docid": "6087ad77caa9947591eb9a3f8b9b342d",
"text": "Geobacter sulfurreducens is a well-studied representative of the Geobacteraceae, which play a critical role in organic matter oxidation coupled to Fe(III) reduction, bioremediation of groundwater contaminated with organics or metals, and electricity production from waste organic matter. In order to investigate G. sulfurreducens central metabolism and electron transport, a metabolic model which integrated genome-based predictions with available genetic and physiological data was developed via the constraint-based modeling approach. Evaluation of the rates of proton production and consumption in the extracellular and cytoplasmic compartments revealed that energy conservation with extracellular electron acceptors, such as Fe(III), was limited relative to that associated with intracellular acceptors. This limitation was attributed to lack of cytoplasmic proton consumption during reduction of extracellular electron acceptors. Model-based analysis of the metabolic cost of producing an extracellular electron shuttle to promote electron transfer to insoluble Fe(III) oxides demonstrated why Geobacter species, which do not produce shuttles, have an energetic advantage over shuttle-producing Fe(III) reducers in subsurface environments. In silico analysis also revealed that the metabolic network of G. sulfurreducens could synthesize amino acids more efficiently than that of Escherichia coli due to the presence of a pyruvate-ferredoxin oxidoreductase, which catalyzes synthesis of pyruvate from acetate and carbon dioxide in a single step. In silico phenotypic analysis of deletion mutants demonstrated the capability of the model to explore the flexibility of G. sulfurreducens central metabolism and correctly predict mutant phenotypes. These results demonstrate that iterative modeling coupled with experimentation can accelerate the understanding of the physiology of poorly studied but environmentally relevant organisms and may help optimize their practical applications.",
"title": ""
},
{
"docid": "d4c1976f8796122eea98f0c3b7577a6b",
"text": "Results from a new experiment in the Philippines shed light on the effects of voter information on vote buying and incumbent advantage. The treatment provided voters with information about the existence of a major spending program and the proposed allocations and promises of mayoral candidates just prior municipal elections. It left voters more knowledgeable about candidates’ proposed policies and increased the salience of spending. Treated voters were more likely to be targeted for vote buying. We develop a model of vote buying that accounts for these results. The information we provided attenuated incumbent advantage, prompting incumbents to increase their vote buying in response. Consistent with this explanation, both knowledge and vote buying impacts were higher in incumbent-dominated municipalities. Our findings show that, in a political environment where vote buying is the currency of electoral mobilization, incumbent efforts to increase voter welfare may take the form of greater vote buying. ∗This project would not have been possible without the support and cooperation of PPCRV volunteers in Ilocos Norte and Ilocos Sur. We are grateful to Michael Davidson for excellent research assistance and to Prudenciano Gordoncillo and the UPLB team for collecting the data. We thank Marcel Fafchamps, Clement Imbert, Pablo Querubin, Simon Quinn and two anonymous reviewers for constructive comments on the pre-analysis plan. Pablo Querubin graciously shared his precinct-level data from the 2010 elections with us. We thank conference and seminar participants at Gothenburg, Copenhagen, and Oxford for comments. The project received funding from the World Bank and ethics approval from the University of Oxford Economics Department (Econ DREC Ref. No. 1213/0014). All remaining errors are ours. The opinions and conclusions expressed here are those of the authors and not those of the World Bank or the Inter-American Development Bank. †University of British Columbia; email: [email protected] ‡Inter-American Development Bank; email: [email protected] §Oxford University; email: [email protected]",
"title": ""
},
{
"docid": "494b375064fbbe012b382d0ad2db2900",
"text": "You are smart to question how different medications interact when used concurrently. Champix, called Chantix in the United States and globally by its generic name varenicline [2], is a prescription medication that can help individuals quit smoking by partially stimulating nicotine receptors in cells throughout the body. Nicorette gum, a type of nicotine replacement therapy (NRT), is also a tool to help smokers quit by providing individuals with the nicotine they crave by delivering the substance in controlled amounts through the lining of the mouth. NRT is available in many other forms including lozenges, patches, inhalers, and nasal sprays. The short answer is that there is disagreement among researchers about whether or not there are negative consequences to chewing nicotine gum while taking varenicline. While some studies suggest no harmful side effects to using them together, others have found that adverse effects from using both at the same time. So, what does the current evidence say?",
"title": ""
},
{
"docid": "e15405f1c0fb52be154e79a2976fbb6d",
"text": "The generalized Poisson regression model has been used to model dispersed count data. It is a good competitor to the negative binomial regression model when the count data is over-dispersed. Zero-inflated Poisson and zero-inflated negative binomial regression models have been proposed for the situations where the data generating process results into too many zeros. In this paper, we propose a zero-inflated generalized Poisson (ZIGP) regression model to model domestic violence data with too many zeros. Estimation of the model parameters using the method of maximum likelihood is provided. A score test is presented to test whether the number of zeros is too large for the generalized Poisson model to adequately fit the domestic violence data.",
"title": ""
},
{
"docid": "394f71d22294ec8f6704ad484a825b20",
"text": "Despite decades of research, the roles of climate and humans in driving the dramatic extinctions of large-bodied mammals during the Late Quaternary remain contentious. We use ancient DNA, species distribution models and the human fossil record to elucidate how climate and humans shaped the demographic history of woolly rhinoceros, woolly mammoth, wild horse, reindeer, bison and musk ox. We show that climate has been a major driver of population change over the past 50,000 years. However, each species responds differently to the effects of climatic shifts, habitat redistribution and human encroachment. Although climate change alone can explain the extinction of some species, such as Eurasian musk ox and woolly rhinoceros, a combination of climatic and anthropogenic effects appears to be responsible for the extinction of others, including Eurasian steppe bison and wild horse. We find no genetic signature or any distinctive range dynamics distinguishing extinct from surviving species, underscoring the challenges associated with predicting future responses of extant mammals to climate and human-mediated habitat change. Toward the end of the Late Quaternary, beginning c. 50,000 years ago, Eurasia and North America lost c. 36% and 72% of their large-bodied mammalian genera (megafauna), respectively1. The debate surrounding the potential causes of these extinctions has focused primarily on the relative roles of climate and humans2,3,4,5. In general, the proportion of species that went extinct was greatest on continents that experienced the most dramatic Correspondence and requests for materials should be addressed to E.W ([email protected]). *Joint first authors †Deceased Supplementary Information is linked to the online version of the paper at www.nature.com/nature. Author contributions E.W. initially conceived and headed the overall project. C.R. headed the species distribution modelling and range measurements. E.D.L. and J.T.S. extracted, amplified and sequenced the reindeer DNA sequences. J.B. extracted, amplified and sequenced the woolly rhinoceros DNA sequences; M.H. generated part of the woolly rhinoceros data. J.W., K-P.K., J.L. and R.K.W. generated the horse DNA sequences; A.C. generated part of the horse data. L.O., E.D.L. and B.S. analysed the genetic data, with input from R.N., K.M., M.A.S. and S.Y.W.H. Palaeoclimate simulations were provided by P.B., A.M.H, J.S.S. and P.J.V. The directly-dated spatial LAT/LON megafauna locality information was collected by E.D.L., K.A.M., D.N.-B., D.B. and A.U.; K.A.M. and D.N-B performed the species distribution modelling and range measurements. M.B. carried out the gene-climate correlation. A.U. and D.B. assembled the human Upper Palaeolithic sites from Eurasia. T.G. and K.E.G. assembled the archaeofaunal assemblages from Siberia. A.U. analysed the spatial overlap of humans and megafauna and the archaeofaunal assemblages. E.D.L., L.O., B.S., K.A.M., D.N.-B., M.K.B., A.U., T.G. and K.E.G. wrote the Supplementary Information. D.F., G.Z., T.W.S., K.A-S., G.B., J.A.B., D.L.J., P.K., T.K., X.L., L.D.M., H.G.M., D.M., M.M., E.S., M.S., R.S.S., T.S., E.S., A.T., R.W., A.C. provided the megafauna samples used for ancient DNA analysis. E.D.L. made the figures. E.D.L, L.O. and E.W. wrote the majority of the manuscript, with critical input from B.S., M.H., K.A.M., M.T.P.G., C.R., R.K.W, A.U. and the remaining authors. Mitochondrial DNA sequences have been deposited in GenBank under accession numbers JN570760-JN571033. 
climatic changes6, implying a major role of climate in species loss. However, the continental pattern of megafaunal extinctions in North America approximately coincides with the first appearance of humans, suggesting a potential anthropogenic contribution to species extinctions3,5. Demographic trajectories of different taxa vary widely and depend on the geographic scale and methodological approaches used3,5,7. For example, genetic diversity in bison8,9, musk ox10 and European cave bear11 declines gradually from c. 50–30,000 calendar years ago (ka BP). In contrast, sudden losses of genetic diversity are observed in woolly mammoth12,13 and cave lion14 long before their extinction, followed by genetic stability until the extinction events. It remains unresolved whether the Late Quaternary extinctions were a cross-taxa response to widespread climatic or anthropogenic stressors, or were a species-specific response to one or both factors15,16. Additionally, it is unclear whether distinctive genetic signatures or geographic range-size dynamics characterise extinct or surviving species—questions of particular importance to the conservation of extant species. To disentangle the processes underlying population dynamics and extinction, we investigate the demographic histories of six megafauna herbivores of the Late Quaternary: woolly rhinoceros (Coelodonta antiquitatis), woolly mammoth (Mammuthus primigenius), horse (wild Equus ferus and living domestic Equus caballus), reindeer/caribou (Rangifer tarandus), bison (Bison priscus/Bison bison) and musk ox (Ovibos moschatus). These taxa were characteristic of Late Quaternary Eurasia and/or North America and represent both extinct and extant species. Our analyses are based on 846 radiocarbon-dated mitochondrial DNA (mtDNA) control region sequences, 1,439 directly-dated megafaunal remains, and 6,291 radiocarbon determinations associated with Upper Palaeolithic human occupations in Eurasia. We reconstruct the demographic histories of the megafauna herbivores from ancient DNA data, model past species distributions and determine the geographic overlap between humans and megafauna over the last 50,000 years. We use these data to investigate how climate change and anthropogenic impacts affected species dynamics at continental and global scales, and contributed to the extinction of some species and the survival of others. Effects of climate change differ across species and continents The direct link between climate change, population size and species extinctions is difficult to document10. However, population size is likely controlled by the amount of available habitat and is indicated by the geographic range of a species17,18. We assessed the role of climate using species distribution models, dated megafauna fossil remains and palaeoclimatic data on temperature and precipitation. We estimated species range sizes at the time periods of 42, 30, 21 and 6 ka BP as a proxy for habitat availability (Fig. 1; Supplementary Information section S1).
Range size dynamics were then compared to demographic histories inferred from ancient DNA using three distinct analyses (Supplementary Information section S3): (i) coalescent-based estimation of changes in effective population size through time (Bayesian skyride19), which allows detection of changes in global genetic diversity; (ii) serial coalescent simulation followed by Approximate Bayesian Computation, which selects among different models describing continental population dynamics; and (iii) isolation-by-distance analysis, which estimates potential population structure and connectivity within continents. If climate was a major factor driving species population sizes, we would expect expansion and contraction of a species’ geographic range to mirror population increase and decline, respectively. We find a positive correlation between changes in the size of available habitat and genetic diversity for the four species—horse, reindeer, bison and musk ox—for which we have range estimates spanning all four time-points (the correlation is not statistically significant for reindeer: p = 0.101) (Fig. 2; Supplementary Information section S4). Hence, species distribution modelling based on fossil distributions and climate data are congruent with estimates of effective population size based on ancient DNA data, even in species with very different life-history traits. We conclude that climate has been a major driving force in megafauna population changes over the past 50,000 years. It is noteworthy that both estimated modelled ranges and genetic data are derived from a subset of the entire fossil record (Supplementary Information sections S1 and S3). Thus, changes in effective population size and range size may change with the addition of more data, especially from outside the geographical regions covered by the present study. However, we expect that the reported positive correlation will prevail when congruent data are compared. The best-supported models of changes in effective population size in North America and Eurasia during periods of dramatic climatic change during the past 50,000 years are those in which populations increase in size (Fig. 3, Supplementary Information section S3). This is true for all taxa except bison. However, the timing is not synchronous across populations. Specifically, we find highest support for population increase beginning c. 34 ka BP in Eurasian horse, reindeer and musk ox (Fig. 3a). Eurasian mammoth and North American horse increase prior to the Last Glacial Maximum (LGM) c. 26 ka BP. Models of population increase in woolly rhinoceros and North American mammoth fit equally well before and after the LGM, and North American reindeer populations increase later still. Only North American bison shows a population decline (Fig. 3b), the intensity of which likely swamps the signal of global population increase starting at c. 35 ka BP identified in the skyride plot",
"title": ""
},
{
"docid": "b96a571e57a3121746d841bed4af4dbe",
"text": "The Open Provenance Model is a model of provenance that is designed to meet the following requirements: (1) To allow provenance information to be exchanged between systems, by means of a compatibility layer based on a shared provenance model. (2) To allow developers to build and share tools that operate on such a provenance model. (3) To define provenance in a precise, technology-agnostic manner. (4) To support a digital representation of provenance for any “thing”, whether produced by computer systems or not. (5) To allow multiple levels of description to coexist. (6) To define a core set of rules that identify the valid inferences that can be made on provenance representation. This document contains the specification of the Open Provenance Model (v1.1) resulting from a community effort to achieve inter-operability in the Provenance Challenge series.",
"title": ""
},
{
"docid": "a981db3aa149caec10b1824c82840782",
"text": "It has been suggested that the performance of a team is determined by the team members’ roles. An analysis of the performance of 342 individuals organised into 33 teams indicates that team roles characterised by creativity, co-ordination and cooperation are positively correlated with team performance. Members of developed teams exhibit certain performance enhancing characteristics and behaviours. Amongst the more developed teams there is a positive relationship between Specialist Role characteristics and team performance. While the characteristics associated with the Coordinator Role are also positively correlated with performance, these can impede the performance of less developed teams.",
"title": ""
},
{
"docid": "42fd940e239ed3748b007fde8b583b25",
"text": "The ImageCLEF’s plant identification task provides a testbed for the system-oriented evaluation of plant identification, more precisely on the 126 tree species identification based on leaf images. Three types of image content are considered: Scan, Scan-like (leaf photographs with a white uniform background), and Photograph (unconstrained leaf with natural background). The main originality of this data is that it was specifically built through a citizen sciences initiative conducted by Tela Botanica, a French social network of amateur and expert botanists. This makes the task closer to the conditions of a real-world application. This overview presents more precisely the resources and assessments of task, summarizes the retrieval approaches employed by the participating groups, and provides an analysis of the main evaluation results. With a total of eleven groups from eight countries and with a total of 30 runs submitted, involving distinct and original methods, this second year pilot task confirms Image Retrieval community interest for biodiversity and botany, and highlights further challenging studies in plant identification.",
"title": ""
},
{
"docid": "f0f6125a0d718789715c3760db18161e",
"text": "Detecting fluid emissions (e.g. urination or leaks) that extend beyond containment systems (e.g. diapers or adult pads) is a cause of concern for users and developers of wearable fluid containment products. Immediate, automated detection would allow users to address the situation quickly, preventing medical conditions such as adverse skin effects and avoiding embarrassment. For product development, fluid emission detection systems would enable more accurate and efficient lab and field evaluation of absorbent products. This paper describes the development of a textile-based fluid-detection sensing method that uses a multi-layer \"keypad matrix\" sensing paradigm using stitched conductive threads. Bench characterization tests determined the effects of sensor spacing, spacer fabric property, and contact pressures on wetness detection for a 5mL minimum benchmark fluid volume. The sensing method and bench-determined requirements were then applied in a close-fitting torso garment for babies that fastens at the crotch (onesie) that is able to detect diaper leakage events. Mannequin testing of the resulting garment confirmed the ability of using wetness sensing timing to infer location of induced 5 mL leaks.",
"title": ""
},
{
"docid": "fc5a04c795fbfdd2b6b53836c9710e4d",
"text": "In the last few years, deep convolutional neural networks have become ubiquitous in computer vision, achieving state-of-the-art results on problems like object detection, semantic segmentation, and image captioning. However, they have not yet been widely investigated in the document analysis community. In this paper, we present a word spotting system based on convolutional neural networks. We train a network to extract a powerful image representation, which we then embed into a word embedding space. This allows us to perform word spotting using both query-by-string and query-by-example in a variety of word embedding spaces, both learned and handcrafted, for verbatim as well as semantic word spotting. Our novel approach is versatile and the evaluation shows that it outperforms the previous state-of-the-art for word spotting on standard datasets.",
"title": ""
},
{
"docid": "046f6c5cc6065c1cb219095fb0dfc06f",
"text": "In this paper, we describe COLABA, a large effort to create resources and processing tools for Dialectal Arabic Blogs. We describe the objectives of the project, the process flow and the interaction between the different components. We briefly describe the manual annotation effort and the resources created. Finally, we sketch how these resources and tools are put together to create DIRA, a termexpansion tool for information retrieval over dialectal Arabic collections using Modern Standard Arabic queries.",
"title": ""
},
{
"docid": "575da85b3675ceaec26143981dbe9b53",
"text": "People are increasingly required to disclose personal information to computerand Internetbased systems in order to register, identify themselves or simply for the system to work as designed. In the present paper, we outline two different methods to easily measure people’s behavioral self-disclosure to web-based forms. The first, the use of an ‘I prefer not to say’ option to sensitive questions is shown to be responsive to the manipulation of level of privacy concern by increasing the salience of privacy issues, and to experimental manipulations of privacy. The second, blurring or increased ambiguity was used primarily by males in response to an income question in a high privacy condition. Implications for the study of self-disclosure in human–computer interaction and web-based research are discussed. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "58c2f9f5f043f87bc51d043f70565710",
"text": "T strategic use of first-party content by two-sided platforms is driven by two key factors: the nature of buyer and seller expectations (favorable versus unfavorable) and the nature of the relationship between first-party content and third-party content (complements or substitutes). Platforms facing unfavorable expectations face an additional constraint: their prices and first-party content investment need to be such that low (zero) participation equilibria are eliminated. This additional constraint typically leads them to invest more (less) in first-party content relative to platforms facing favorable expectations when firstand third-party content are substitutes (complements). These results hold with both simultaneous and sequential entry of the two sides. With two competing platforms—incumbent facing favorable expectations and entrant facing unfavorable expectations— and multi-homing on one side of the market, the incumbent always invests (weakly) more in first-party content relative to the case in which it is a monopolist.",
"title": ""
},
{
"docid": "9b47d3883d85c0fc61b3b033bdc8aee9",
"text": "Prediction and control of the dynamics of complex networks is a central problem in network science. Structural and dynamical similarities of different real networks suggest that some universal laws might accurately describe the dynamics of these networks, albeit the nature and common origin of such laws remain elusive. Here we show that the causal network representing the large-scale structure of spacetime in our accelerating universe is a power-law graph with strong clustering, similar to many complex networks such as the Internet, social, or biological networks. We prove that this structural similarity is a consequence of the asymptotic equivalence between the large-scale growth dynamics of complex networks and causal networks. This equivalence suggests that unexpectedly similar laws govern the dynamics of complex networks and spacetime in the universe, with implications to network science and cosmology.",
"title": ""
}
] |
scidocsrr
|
f59f315f9c0279ab1456d3ae59527e07
|
Multiobjective Combinatorial Optimization by Using Decomposition and Ant Colony
|
[
{
"docid": "3824a61e476fa359a104d03f7a99262c",
"text": "We describe an artificial ant colony capable of solving the travelling salesman problem (TSP). Ants of the artificial colony are able to generate successively shorter feasible tours by using information accumulated in the form of a pheromone trail deposited on the edges of the TSP graph. Computer simulations demonstrate that the artificial ant colony is capable of generating good solutions to both symmetric and asymmetric instances of the TSP. The method is an example, like simulated annealing, neural networks and evolutionary computation, of the successful use of a natural metaphor to design an optimization algorithm.",
"title": ""
}
] |
[
{
"docid": "cd1274c785a410f0e38b8e033555ee9b",
"text": "This paper presents a graph signal denoising method with the trilateral filter defined in the graph spectral domain. The original trilateral filter (TF) is a data-dependent filter that is widely used as an edge-preserving smoothing method for image processing. However, because of the data-dependency, one cannot provide its frequency domain representation. To overcome this problem, we establish the graph spectral domain representation of the data-dependent filter, i.e., a spectral graph TF (SGTF). This representation enables us to design an effective graph signal denoising filter with a Tikhonov regularization. Moreover, for the proposed graph denoising filter, we provide a parameter optimization technique to search for a regularization parameter that approximately minimizes the mean squared error w.r.t. the unknown graph signal of interest. Comprehensive experimental results validate our graph signal processing-based approach for images and graph signals.",
"title": ""
},
{
"docid": "340a2fd43f494bb1eba58629802a738c",
"text": "A new image decomposition scheme, called the adaptive directional total variation (ADTV) model, is proposed to achieve effective segmentation and enhancement for latent fingerprint images in this work. The proposed model is inspired by the classical total variation models, but it differentiates itself by integrating two unique features of fingerprints; namely, scale and orientation. The proposed ADTV model decomposes a latent fingerprint image into two layers: cartoon and texture. The cartoon layer contains unwanted components (e.g., structured noise) while the texture layer mainly consists of the latent fingerprint. This cartoon-texture decomposition facilitates the process of segmentation, as the region of interest can be easily detected from the texture layer using traditional segmentation methods. The effectiveness of the proposed scheme is validated through experimental results on the entire NIST SD27 latent fingerprint database. The proposed scheme achieves accurate segmentation and enhancement results, leading to improved feature detection and latent matching performance.",
"title": ""
},
{
"docid": "13c6e4fc3a20528383ef7625c9dd2b79",
"text": "Seasonal affective disorder (SAD) is a syndrome characterized by recurrent depressions that occur annually at the same time each year. We describe 29 patients with SAD; most of them had a bipolar affective disorder, especially bipolar II, and their depressions were generally characterized by hypersomnia, overeating, and carbohydrate craving and seemed to respond to changes in climate and latitude. Sleep recordings in nine depressed patients confirmed the presence of hypersomnia and showed increased sleep latency and reduced slow-wave (delta) sleep. Preliminary studies in 11 patients suggest that extending the photoperiod with bright artificial light has an antidepressant effect.",
"title": ""
},
{
"docid": "a1ffd254e355cf312bf269ec3751200d",
"text": "Existing RGB-D object recognition methods either use channel specific handcrafted features, or learn features with deep networks. The former lack representation ability while the latter require large amounts of training data and learning time. In real-time robotics applications involving RGB-D sensors, we do not have the luxury of both. In this paper, we propose Localized Deep Extreme Learning Machines (LDELM) that efficiently learn features from RGB-D data. By using localized patches, not only is the problem of data sparsity solved, but the learned features are robust to occlusions and viewpoint variations. LDELM learns deep localized features in an unsupervised way from random patches of the training data. Each image is then feed-forwarded, patch-wise, through the LDELM to form a cuboid of features. The cuboid is divided into cells and pooled to get the final compact image representation which is then used to train an ELM classifier. Experiments on the benchmark Washington RGB-D and 2D3D datasets show that the proposed algorithm not only is significantly faster to train but also outperforms state-of-the-art methods in terms of accuracy and classification time.",
"title": ""
},
{
"docid": "251a47eb1a5307c5eba7372ce09ea641",
"text": "A new class of target link flooding attacks (LFA) can cut off the Internet connections of a target area without being detected because they employ legitimate flows to congest selected links. Although new mechanisms for defending against LFA have been proposed, the deployment issues limit their usages since they require modifying routers. In this paper, we propose LinkScope, a novel system that employs both the end-to-end and the hopby-hop network measurement techniques to capture abnormal path performance degradation for detecting LFA and then correlate the performance data and traceroute data to infer the target links or areas. Although the idea is simple, we tackle a number of challenging issues, such as conducting large-scale Internet measurement through noncooperative measurement, assessing the performance on asymmetric Internet paths, and detecting LFA. We have implemented LinkScope with 7174 lines of C codes and the extensive evaluation in a testbed and the Internet show that LinkScope can quickly detect LFA with high accuracy and low false positive rate.",
"title": ""
},
{
"docid": "0850f46a4bcbe1898a6a2dca9f61ea61",
"text": "Public opinion polarization is here conceived as a process of alignment along multiple lines of potential disagreement and measured as growing constraint in individuals' preferences. Using NES data from 1972 to 2004, the authors model trends in issue partisanship-the correlation of issue attitudes with party identification-and issue alignment-the correlation between pairs of issues-and find a substantive increase in issue partisanship, but little evidence of issue alignment. The findings suggest that opinion changes correspond more to a resorting of party labels among voters than to greater constraint on issue attitudes: since parties are more polarized, they are now better at sorting individuals along ideological lines. Levels of constraint vary across population subgroups: strong partisans and wealthier and politically sophisticated voters have grown more coherent in their beliefs. The authors discuss the consequences of partisan realignment and group sorting on the political process and potential deviations from the classic pluralistic account of American politics.",
"title": ""
},
{
"docid": "3f21c1bb9302d29bc2c816aaabf2e613",
"text": "BACKGROUND\nPlasma brain natriuretic peptide (BNP) level increases in proportion to the degree of right ventricular dysfunction in pulmonary hypertension. We sought to assess the prognostic significance of plasma BNP in patients with primary pulmonary hypertension (PPH).\n\n\nMETHODS AND RESULTS\nPlasma BNP was measured in 60 patients with PPH at diagnostic catheterization, together with atrial natriuretic peptide, norepinephrine, and epinephrine. Measurements were repeated in 53 patients after a mean follow-up period of 3 months. Forty-nine of the patients received intravenous or oral prostacyclin. During a mean follow-up period of 24 months, 18 patients died of cardiopulmonary causes. According to multivariate analysis, baseline plasma BNP was an independent predictor of mortality. Patients with a supramedian level of baseline BNP (>/=150 pg/mL) had a significantly lower survival rate than those with an inframedian level, according to Kaplan-Meier survival curves (P<0.05). Plasma BNP in survivors decreased significantly during the follow-up (217+/-38 to 149+/-30 pg/mL, P<0. 05), whereas that in nonsurvivors increased (365+/-77 to 544+/-68 pg/mL, P<0.05). Thus, survival was strikingly worse for patients with a supramedian value of follow-up BNP (>/=180 pg/mL) than for those with an inframedian value (P<0.0001).\n\n\nCONCLUSIONS\nA high level of plasma BNP, and in particular, a further increase in plasma BNP during follow-up, may have a strong, independent association with increased mortality rates in patients with PPH.",
"title": ""
},
{
"docid": "168a959b617dc58e6355c1b0ab46c3fc",
"text": "Detection of true human emotions has attracted a lot of interest in the recent years. The applications range from e-retail to health-care for developing effective companion systems with reliable emotion recognition. This paper proposes heart rate variability (HRV) features extracted from photoplethysmogram (PPG) signal obtained from a cost-effective PPG device such as Pulse Oximeter for detecting and recognizing the emotions on the basis of the physiological signals. The HRV features obtained from both time and frequency domain are used as features for classification of emotions. These features are extracted from the entire PPG signal obtained during emotion elicitation and baseline neutral phase. For analyzing emotion recognition, using the proposed HRV features, standard video stimuli are used. We have considered three emotions namely, happy, sad and neutral or null emotions. Support vector machines are used for developing the models and features are explored to achieve average emotion recognition of 83.8% for the above model and listed features.",
"title": ""
},
{
"docid": "f7ce012a5943be5137df7d414e9de75a",
"text": "As multi-core processors proliferate, it has become more important than ever to ensure efficient execution of parallel jobs on multiprocessor systems. In this paper, we study the problem of scheduling parallel jobs with arbitrary release time on multiprocessors while minimizing the jobs’ mean response time. We focus on non-clairvoyant scheduling schemes that adaptively reallocate processors based on periodic feedbacks from the individual jobs. Since it is known that no deterministic non-clairvoyant algorithm is competitive for this problem,we focus on resource augmentation analysis, and show that two adaptive algorithms, Agdeq and Abgdeq, achieve competitive performance using O(1) times faster processors than the adversary. These results are obtained through a general framework for analyzing the mean response time of any two-level adaptive scheduler. Our simulation results verify the effectiveness of Agdeq and Abgdeq by evaluating their performances over a wide range of workloads consisting of synthetic parallel jobs with different parallelism characteristics.",
"title": ""
},
{
"docid": "a7959808cb41963e8d204c3078106842",
"text": "Human alteration of the global environment has triggered the sixth major extinction event in the history of life and caused widespread changes in the global distribution of organisms. These changes in biodiversity alter ecosystem processes and change the resilience of ecosystems to environmental change. This has profound consequences for services that humans derive from ecosystems. The large ecological and societal consequences of changing biodiversity should be minimized to preserve options for future solutions to global environmental problems.",
"title": ""
},
{
"docid": "c13247847d60a5ebd19822140403a238",
"text": "Parallelizing existing sequential programs to run efficiently on multicores is hard. The Java 5 package java.util.concurrent (j.u.c.) supports writing concurrent programs: much of the complexity of writing thread-safe and scalable programs is hidden in the library. To use this package, programmers still need to reengineer existing code. This is tedious because it requires changing many lines of code, is error-prone because programmers can use the wrong APIs, and is omission-prone because programmers can miss opportunities to use the enhanced APIs. This paper presents our tool, Concurrencer, that enables programmers to refactor sequential code into parallel code that uses three j.u.c. concurrent utilities. Concurrencer does not require any program annotations. Its transformations span multiple, non-adjacent, program statements. A find-and-replace tool can not perform such transformations, which require program analysis. Empirical evaluation shows that Concurrencer refactors code effectively: Concurrencer correctly identifies and applies transformations that some open-source developers overlooked, and the converted code exhibits good speedup.",
"title": ""
},
{
"docid": "19d79b136a9af42ac610131217de8c08",
"text": "The aim of the experimental study described in this article is to investigate the effect of a lifelike character with subtle expressivity on the affective state of users. The character acts as a quizmaster in the context of a mathematical game. This application was chosen as a simple, and for the sake of the experiment, highly controllable, instance of human–computer interfaces and software. Subtle expressivity refers to the character’s affective response to the user’s performance by emulating multimodal human–human communicative behavior such as different body gestures and varying linguistic style. The impact of em-pathic behavior, which is a special form of affective response, is examined by deliberately frustrating the user during the game progress. There are two novel aspects in this investigation. First, we employ an animated interface agent to address the affective state of users rather than a text-based interface, which has been used in related research. Second, while previous empirical studies rely on questionnaires to evaluate the effect of life-like characters, we utilize physiological information of users (in addition to questionnaire data) in order to precisely associate the occurrence of interface events with users’ autonomic nervous system activity. The results of our study indicate that empathic character response can significantly decrease user stress see front matter r 2004 Elsevier Ltd. All rights reserved. .ijhcs.2004.11.009 cle is a significantly revised and extended version of Prendinger et al. (2003). nding author. Tel.: +813 4212 2650; fax: +81 3 3556 1916. dresses: [email protected] (H. Prendinger), [email protected] (J. Mori), v.t.u-tokyo.ac.jp (M. Ishizuka).",
"title": ""
},
{
"docid": "4b3592efd8a4f6f6c9361a6f66a30a5f",
"text": "Error correction codes provides a mean to detect and correct errors introduced by the transmission channel. This paper presents a high-speed parallel cyclic redundancy check (CRC) implementation based on unfolding, pipelining, and retiming algorithms. CRC architectures are first pipelined to reduce the iteration bound by using novel look-ahead pipelining methods and then unfolded and retimed to design high-speed parallel circuits. The study and implementation using Verilog HDL. Modelsim Xilinx Edition (MXE) will be used for simulation and functional verification. Xilinx ISE will be used for synthesis and bit file generation. The Xilinx Chip scope will be used to test the results on Spartan 3E",
"title": ""
},
{
"docid": "73b4cceb1546a94260c75ae8bed8edd8",
"text": "We address the problem of distance metric learning (DML), defined as learning a distance consistent with a notion of semantic similarity. Traditionally, for this problem supervision is expressed in the form of sets of points that follow an ordinal relationship – an anchor point x is similar to a set of positive points Y , and dissimilar to a set of negative points Z, and a loss defined over these distances is minimized. While the specifics of the optimization differ, in this work we collectively call this type of supervision Triplets and all methods that follow this pattern Triplet-Based methods. These methods are challenging to optimize. A main issue is the need for finding informative triplets, which is usually achieved by a variety of tricks such as increasing the batch size, hard or semi-hard triplet mining, etc. Even with these tricks, the convergence rate of such methods is slow. In this paper we propose to optimize the triplet loss on a different space of triplets, consisting of an anchor data point and similar and dissimilar proxy points which are learned as well. These proxies approximate the original data points, so that a triplet loss over the proxies is a tight upper bound of the original loss. This proxy-based loss is empirically better behaved. As a result, the proxy-loss improves on state-of-art results for three standard zero-shot learning datasets, by up to 15% points, while converging three times as fast as other triplet-based losses.",
"title": ""
},
{
"docid": "0cc665089be9aa8217baac32f0385f41",
"text": "Deep neural networks have achieved near-human accuracy levels in various types of classification and prediction tasks including images, text, speech, and video data. However, the networks continue to be treated mostly as black-box function approximators, mapping a given input to a classification output. The next step in this human-machine evolutionary process — incorporating these networks into mission critical processes such as medical diagnosis, planning and control — requires a level of trust association with the machine output. Typically, statistical metrics are used to quantify the uncertainty of an output. However, the notion of trust also depends on the visibility that a human has into the working of the machine. In other words, the neural network should provide human-understandable justifications for its output leading to insights about the inner workings. We call such models as interpretable deep networks. Interpretability is not a monolithic notion. In fact, the subjectivity of an interpretation, due to different levels of human understanding, implies that there must be a multitude of dimensions that together constitute interpretability. In addition, the interpretation itself can be provided either in terms of the low-level network parameters, or in terms of input features used by the model. In this paper, we outline some of the dimensions that are useful for model interpretability, and categorize prior work along those dimensions. In the process, we perform a gap analysis of what needs to be done to improve model interpretability.",
"title": ""
},
{
"docid": "d473619f76f81eced041df5bc012c246",
"text": "Monocular visual odometry (VO) and simultaneous localization and mapping (SLAM) have seen tremendous improvements in accuracy, robustness, and efficiency, and have gained increasing popularity over recent years. Nevertheless, not so many discussions have been carried out to reveal the influences of three very influential yet easily overlooked aspects, such as photometric calibration, motion bias, and rolling shutter effect. In this work, we evaluate these three aspects quantitatively on the state of the art of direct, feature-based, and semi-direct methods, providing the community with useful practical knowledge both for better applying existing methods and developing new algorithms of VO and SLAM. Conclusions (some of which are counterintuitive) are drawn with both technical and empirical analyses to all of our experiments. Possible improvements on existing methods are directed or proposed, such as a subpixel accuracy refinement of oriented fast and rotated brief (ORB)-SLAM, which boosts its performance.",
"title": ""
},
{
"docid": "f249a6089a789e52eeadc8ae16213bc1",
"text": "We have collected a new face data set that will facilitate research in the problem of frontal to profile face verification `in the wild'. The aim of this data set is to isolate the factor of pose variation in terms of extreme poses like profile, where many features are occluded, along with other `in the wild' variations. We call this data set the Celebrities in Frontal-Profile (CFP) data set. We find that human performance on Frontal-Profile verification in this data set is only slightly worse (94.57% accuracy) than that on Frontal-Frontal verification (96.24% accuracy). However we evaluated many state-of-the-art algorithms, including Fisher Vector, Sub-SML and a Deep learning algorithm. We observe that all of them degrade more than 10% from Frontal-Frontal to Frontal-Profile verification. The Deep learning implementation, which performs comparable to humans on Frontal-Frontal, performs significantly worse (84.91% accuracy) on Frontal-Profile. This suggests that there is a gap between human performance and automatic face recognition methods for large pose variation in unconstrained images.",
"title": ""
},
{
"docid": "9c52333616cf2b1dce267333f4fad2ba",
"text": "We present a new type of actuatable display, called Tilt Displays, that provide visual feedback combined with multi-axis tilting and vertical actuation. Their ability to physically mutate provides users with an additional information channel that facilitates a range of new applications including collaboration and tangible entertainment while enhancing familiar applications such as terrain modelling by allowing 3D scenes to be rendered in a physical-3D manner. Through a mobile 3x3 custom built prototype, we examine the design space around Tilt Displays, categorise output modalities and conduct two user studies. The first, an exploratory study examines users' initial impressions of Tilt Displays and probes potential interactions and uses. The second takes a quantitative approach to understand interaction possibilities with such displays, resulting in the production of two user-defined gesture sets: one for manipulating the surface of the Tilt Display, the second for conducting everyday interactions.",
"title": ""
},
{
"docid": "6a2e3c783b468474ca0f67d7c5af456c",
"text": "We evaluated the cytotoxic effects of four prostaglandin analogs (PGAs) used to treat glaucoma. First we established primary cultures of conjunctival stromal cells from healthy donors. Then cell cultures were incubated with different concentrations (0, 0.1, 1, 5, 25, 50 and 100%) of commercial formulations of bimatoprost, tafluprost, travoprost and latanoprost for increasing periods (5 and 30 min, 1 h, 6 h and 24 h) and cell survival was assessed with three different methods: WST-1, MTT and calcein/AM-ethidium homodimer-1 assays. Our results showed that all PGAs were associated with a certain level of cell damage, which correlated significantly with the concentration of PGA used, and to a lesser extent with culture time. Tafluprost tended to be less toxic than bimatoprost, travoprost and latanoprost after all culture periods. The results for WST-1, MTT and calcein/AM-ethidium homodimer-1 correlated closely. When the average lethal dose 50 was calculated, we found that the most cytotoxic drug was latanoprost, whereas tafluprost was the most sparing of the ocular surface in vitro. These results indicate the need to design novel PGAs with high effectiveness but free from the cytotoxic effects that we found, or at least to obtain drugs that are functional at low dosages. The fact that the commercial formulation of tafluprost used in this work was preservative-free may support the current tendency to eliminate preservatives from eye drops for clinical use.",
"title": ""
},
{
"docid": "afe2bd0d8c8ad5495eb4907bf7ffa28d",
"text": "Shannnon entropy is an efficient tool to measure uncertain information. However, it cannot handle the more uncertain situation when the uncertainty is represented by basic probability assignment (BPA), instead of probability distribution, under the framework of Dempster Shafer evidence theory. To address this issue, a new entropy, named as Deng entropy, is proposed. The proposed Deng entropy is the generalization of Shannnon entropy. If uncertain information is represented by probability distribution, the uncertain degree measured by Deng entropy is the same as that of Shannnon’s entropy. Some numerical examples are illustrated to shown the efficiency of Deng entropy.",
"title": ""
}
] |
scidocsrr
|
c916cb0706485d34dbd445027e7ab2c2
|
Heuristic Feature Selection for Clickbait Detection
|
[
{
"docid": "40da1f85f7bdc84537a608ce6bec0e17",
"text": "This paper reports on the PAN 2014 evaluation lab which hosts three shared tasks on plagiarism detection, author identification, and author profiling. To improve the reproducibility of shared tasks in general, and PAN’s tasks in particular, the Webis group developed a new web service called TIRA, which facilitates software submissions. Unlike many other labs, PAN asks participants to submit running softwares instead of their run output. To deal with the organizational overhead involved in handling software submissions, the TIRA experimentation platform helps to significantly reduce the workload for both participants and organizers, whereas the submitted softwares are kept in a running state. This year, we addressed the matter of responsibility of successful execution of submitted softwares in order to put participants back in charge of executing their software at our site. In sum, 57 softwares have been submitted to our lab; together with the 58 software submissions of last year, this forms the largest collection of softwares for our three tasks to date, all of which are readily available for further analysis. The report concludes with a brief summary of each task.",
"title": ""
},
{
"docid": "3a7c0ab68349e502d3803e7dd77bd69d",
"text": "Clickbait has become a nuisance on social media. To address the urging task of clickbait detection, we constructed a new corpus of 38,517 annotated Twitter tweets, the Webis Clickbait Corpus 2017. To avoid biases in terms of publisher and topic, tweets were sampled from the top 27 most retweeted news publishers, covering a period of 150 days. Each tweet has been annotated on 4-point scale by five annotators recruited at Amazon’s Mechanical Turk. The corpus has been employed to evaluate 12 clickbait detectors submitted to the Clickbait Challenge 2017. Download: https://webis.de/data/webis-clickbait-17.html Challenge: https://clickbait-challenge.org",
"title": ""
}
] |
[
{
"docid": "ba2e16103676fa57bc3ca841106d2d32",
"text": "The purpose of this study was to investigate the effect of the ultrasonic cavitation versus low level laser therapy in the treatment of abdominal adiposity in female post gastric bypass. Subjects: Sixty female suffering from localized fat deposits at the abdomen area after gastric bypass were divided randomly and equally into three equal groups Group (1): were received low level laser therapy plus bicycle exercises and abdominal exercises for 3 months, Group (2): were received ultrasonic cavitation therapy plus bicycle exercises and abdominal exercises for 3 months, and Group (3): were received bicycle exercises and abdominal exercises for 3 months. Methods: data were obtained for each patient from waist circumferences, skin fold and ultrasonography measurements were done after six weeks postoperative (preexercise) and at three months postoperative. The physical therapy program began, six weeks postoperative for experimental group. Including aerobic exercises performed on the stationary bicycle, for 30 min, 3 sessions per week for three months Results: showed a statistically significant decrease in waist circumferences, skin fold and ultrasonography measurements in the three groups, with a higher rate of reduction in Group (1) and Group (2) .Also there was a non-significant difference between Group (1) and Group (2). Conclusion: these results suggested that bothlow level laser therapy and ultrasonic cavitation had a significant effect on abdominal adiposity after gastric bypass in female.",
"title": ""
},
{
"docid": "548d87ac6f8a023d9f65af371ad9314c",
"text": "Mindfiilness meditation is an increasingly popular intervention for the treatment of physical illnesses and psychological difficulties. Using intervention strategies with mechanisms familiar to cognitive behavioral therapists, the principles and practice of mindfijlness meditation offer promise for promoting many of the most basic elements of positive psychology. It is proposed that mindfulness meditation promotes positive adjustment by strengthening metacognitive skills and by changing schemas related to emotion, health, and illness. Additionally, the benefits of yoga as a mindfulness practice are explored. Even though much empirical work is needed to determine the parameters of mindfulness meditation's benefits, and the mechanisms by which it may achieve these benefits, theory and data thus far clearly suggest the promise of mindfulness as a link between positive psychology and cognitive behavioral therapies.",
"title": ""
},
{
"docid": "70c8caf1bdbdaf29072903e20c432854",
"text": "We show that the topological modular functor from Witten–Chern–Simons theory is universal for quantum computation in the sense that a quantum circuit computation can be efficiently approximated by an intertwining action of a braid on the functor’s state space. A computational model based on Chern–Simons theory at a fifth root of unity is defined and shown to be polynomially equivalent to the quantum circuit model. The chief technical advance: the density of the irreducible sectors of the Jones representation has topological implications which will be considered elsewhere.",
"title": ""
},
{
"docid": "6737955fd1876a40fc0e662a4cac0711",
"text": "Cloud computing is a novel perspective for large scale distributed computing and parallel processing. It provides computing as a utility service on a pay per use basis. The performance and efficiency of cloud computing services always depends upon the performance of the user tasks submitted to the cloud system. Scheduling of the user tasks plays significant role in improving performance of the cloud services. Task scheduling is one of the main types of scheduling performed. This paper presents a detailed study of various task scheduling methods existing for the cloud environment. A brief analysis of various scheduling parameters considered in these methods is also discussed in this paper.",
"title": ""
},
{
"docid": "34b3c5ee3ea466c23f5c7662f5ce5b33",
"text": "A hstruct -The concept of a super value node is developed to estend the theor? of influence diagrams to allow dynamic programming to be performed within this graphical modeling framework. The operations necessa? to exploit the presence of these nodes and efficiently analyze the models are developed. The key result is that by reprewnting value function separability in the structure of the graph of the influence diagram. formulation is simplified and operations on the model can take advantage of the wparability. Froni the decision analysis perspective. this allows simple exploitation of separabilih in the value function of a decision problem which can significantly reduce memory and computation requirements. Importantly. this allows algorithms to be designed to solve influence diagrams that automatically recognize the opportunih for applying dynamic programming. From the decision processes perspective, influence diagrams with super value nodes allow efficient formulation and solution of nonstandard decision process structures. They a h allow the exploitation of conditional independence between state variables. Examples are provided that demonstrate these advantages.",
"title": ""
},
{
"docid": "dc83550afd690e371283428647ed806e",
"text": "Recently, convolutional neural networks have demonstrated excellent performance on various visual tasks, including the classification of common two-dimensional images. In this paper, deep convolutional neural networks are employed to classify hyperspectral images directly in spectral domain. More specifically, the architecture of the proposed classifier contains five layers with weights which are the input layer, the convolutional layer, the max pooling layer, the full connection layer, and the output layer. These five layers are implemented on each spectral signature to discriminate against others. Experimental results based on several hyperspectral image data sets demonstrate that the proposed method can achieve better classification performance than some traditional methods, such as support vector machines and the conventional deep learning-based methods.",
"title": ""
},
{
"docid": "43975c43de57d889b038cdee8b35e786",
"text": "We present an algorithm for computing rigorous solutions to a large class of ordinary differential equations. The main algorithm is based on a partitioning process and the use of interval arithmetic with directed rounding. As an application, we prove that the Lorenz equations support a strange attractor, as conjectured by Edward Lorenz in 1963. This conjecture was recently listed by Steven Smale as one of several challenging problems for the twenty-first century. We also prove that the attractor is robust, i.e., it persists under small perturbations of the coefficients in the underlying differential equations. Furthermore, the flow of the equations admits a unique SRB measure, whose support coincides with the attractor. The proof is based on a combination of normal form theory and rigorous computations.",
"title": ""
},
{
"docid": "d1b20385d90fe1e98a07f9cf55af6adb",
"text": "Cerebellar cognitive affective syndrome (CCAS; Schmahmann's syndrome) is characterized by deficits in executive function, linguistic processing, spatial cognition, and affect regulation. Diagnosis currently relies on detailed neuropsychological testing. The aim of this study was to develop an office or bedside cognitive screen to help identify CCAS in cerebellar patients. Secondary objectives were to evaluate whether available brief tests of mental function detect cognitive impairment in cerebellar patients, whether cognitive performance is different in patients with isolated cerebellar lesions versus complex cerebrocerebellar pathology, and whether there are cognitive deficits that should raise red flags about extra-cerebellar pathology. Comprehensive standard neuropsychological tests, experimental measures and clinical rating scales were administered to 77 patients with cerebellar disease-36 isolated cerebellar degeneration or injury, and 41 complex cerebrocerebellar pathology-and to healthy matched controls. Tests that differentiated patients from controls were used to develop a screening instrument that includes the cardinal elements of CCAS. We validated this new scale in a new cohort of 39 cerebellar patients and 55 healthy controls. We confirm the defining features of CCAS using neuropsychological measures. Deficits in executive function were most pronounced for working memory, mental flexibility, and abstract reasoning. Language deficits included verb for noun generation and phonemic > semantic fluency. Visual spatial function was degraded in performance and interpretation of visual stimuli. Neuropsychiatric features included impairments in attentional control, emotional control, psychosis spectrum disorders and social skill set. From these results, we derived a 10-item scale providing total raw score, cut-offs for each test, and pass/fail criteria that determined 'possible' (one test failed), 'probable' (two tests failed), and 'definite' CCAS (three tests failed). When applied to the exploratory cohort, and administered to the validation cohort, the CCAS/Schmahmann scale identified sensitivity and selectivity, respectively as possible exploratory cohort: 85%/74%, validation cohort: 95%/78%; probable exploratory cohort: 58%/94%, validation cohort: 82%/93%; and definite exploratory cohort: 48%/100%, validation cohort: 46%/100%. In patients in the exploratory cohort, Mini-Mental State Examination and Montreal Cognitive Assessment scores were within normal range. Complex cerebrocerebellar disease patients were impaired on similarities in comparison to isolated cerebellar disease. Inability to recall words from multiple choice occurred only in patients with extra-cerebellar disease. The CCAS/Schmahmann syndrome scale is useful for expedited clinical assessment of CCAS in patients with cerebellar disorders.awx317media15678692096001.",
"title": ""
},
{
"docid": "0674479836883d572b05af6481f27a0d",
"text": "Contents Preface vii Chapter 1. Graph Theory in the Information Age 1 1.1. Introduction 1 1.2. Basic definitions 3 1.3. Degree sequences and the power law 6 1.4. History of the power law 8 1.5. Examples of power law graphs 10 1.6. An outline of the book 17 Chapter 2. Old and New Concentration Inequalities 21 2.1. The binomial distribution and its asymptotic behavior 21 2.2. General Chernoff inequalities 25 2.3. More concentration inequalities 30 2.4. A concentration inequality with a large error estimate 33 2.5. Martingales and Azuma's inequality 35 2.6. General martingale inequalities 38 2.7. Supermartingales and Submartingales 41 2.8. The decision tree and relaxed concentration inequalities 46 Chapter 3. A Generative Model — the Preferential Attachment Scheme 55 3.1. Basic steps of the preferential attachment scheme 55 3.2. Analyzing the preferential attachment model 56 3.3. A useful lemma for rigorous proofs 59 3.4. The peril of heuristics via an example of balls-and-bins 60 3.5. Scale-free networks 62 3.6. The sharp concentration of preferential attachment scheme 64 3.7. Models for directed graphs 70 Chapter 4. Duplication Models for Biological Networks 75 4.1. Biological networks 75 4.2. The duplication model 76 4.3. Expected degrees of a random graph in the duplication model 77 4.4. The convergence of the expected degrees 79 4.5. The generating functions for the expected degrees 83 4.6. Two concentration results for the duplication model 84 4.7. Power law distribution of generalized duplication models 89 Chapter 5. Random Graphs with Given Expected Degrees 91 5.1. The Erd˝ os-Rényi model 91 5.2. The diameter of G n,p 95 iii iv CONTENTS 5.3. A general random graph model 97 5.4. Size, volume and higher order volumes 97 5.5. Basic properties of G(w) 100 5.6. Neighborhood expansion in random graphs 103 5.7. A random power law graph model 107 5.8. Actual versus expected degree sequence 109 Chapter 6. The Rise of the Giant Component 113 6.1. No giant component if w < 1? 114 6.2. Is there a giant component if˜w > 1? 115 6.3. No giant component if˜w < 1? 116 6.4. Existence and uniqueness of the giant component 117 6.5. A lemma on neighborhood growth 126 6.6. The volume of the giant component 129 6.7. Proving the volume estimate of the giant component 131 6.8. Lower bounds for the volume of the giant component 136 6.9. The complement of the giant component and its size 138 6.10. …",
"title": ""
},
{
"docid": "176dc97bd2ce3c1fd7d3a8d6913cff70",
"text": "Packet broadcasting is a form of data communications architecture which can combine the features of packet switching with those of broadcast channels for data communication networks. Much of the basic theory of packet broadcasting has been presented as a byproduct in a sequence of papers with a distinctly practical emphasis. In this paper we provide a unified presentation of packet broadcasting theory. In Section I1 we introduce the theory of packet broadcasting data networks. In Section I11 we provide some theoretical results dealing with the performance of a packet broadcasting network when the users of the network have a variety of data rates. In Section IV we deal with packet broadcasting networks distributed in space, and in Section V we derive some properties of power-limited packet broadcasting channels,showing that the throughput of such channels can approach that of equivalent point-to-point channels.",
"title": ""
},
{
"docid": "e59b7782cefc46191d36ba7f59d2f2b8",
"text": "Music is capable of evoking exceptionally strong emotions and of reliably affecting the mood of individuals. Functional neuroimaging and lesion studies show that music-evoked emotions can modulate activity in virtually all limbic and paralimbic brain structures. These structures are crucially involved in the initiation, generation, detection, maintenance, regulation and termination of emotions that have survival value for the individual and the species. Therefore, at least some music-evoked emotions involve the very core of evolutionarily adaptive neuroaffective mechanisms. Because dysfunctions in these structures are related to emotional disorders, a better understanding of music-evoked emotions and their neural correlates can lead to a more systematic and effective use of music in therapy.",
"title": ""
},
{
"docid": "172ee4ea5615c415423b4224baa31d86",
"text": "Many companies are deploying services largely based on machine-learning algorithms for sophisticated processing of large amounts of data, either for consumers or industry. The state-of-the-art and most popular such machine-learning algorithms are Convolutional and Deep Neural Networks (CNNs and DNNs), which are known to be computationally and memory intensive. A number of neural network accelerators have been recently proposed which can offer high computational capacity/area ratio, but which remain hampered by memory accesses. However, unlike the memory wall faced by processors on general-purpose workloads, the CNNs and DNNs memory footprint, while large, is not beyond the capability of the on-chip storage of a multi-chip system. This property, combined with the CNN/DNN algorithmic characteristics, can lead to high internal bandwidth and low external communications, which can in turn enable high-degree parallelism at a reasonable area cost. In this article, we introduce a custom multi-chip machine-learning architecture along those lines, and evaluate performance by integrating electrical and optical inter-chip interconnects separately. We show that, on a subset of the largest known neural network layers, it is possible to achieve a speedup of 656.63× over a GPU, and reduce the energy by 184.05× on average for a 64-chip system. We implement the node down to the place and route at 28 nm, containing a combination of custom storage and computational units, with electrical inter-chip interconnects.",
"title": ""
},
{
"docid": "22e21aab5d41c84a26bc09f9b7402efa",
"text": "Skeem for their thoughtful comments and suggestions.",
"title": ""
},
{
"docid": "381c02fb1ce523ddbdfe3acdde20abf1",
"text": "Domain-specific accelerators (DSAs), which sacrifice programmability for efficiency, are a reaction to the waning benefits of device scaling. This article demonstrates that there are commonalities between DSAs that can be exploited with programmable mechanisms. The goals are to create a programmable architecture that can match the benefits of a DSA and to create a platform for future accelerator investigations.",
"title": ""
},
{
"docid": "be311c7a047a18fbddbab120aa97a374",
"text": "This paper presents a novel mechatronics master-slave setup for hand telerehabilitation. The system consists of a sensorized glove acting as a remote master and a powered hand exoskeleton acting as a slave. The proposed architecture presents three main innovative solutions. First, it provides the therapist with an intuitive interface (a sensorized wearable glove) for conducting the rehabilitation exercises. Second, the patient can benefit from a robot-aided physical rehabilitation in which the slave hand robotic exoskeleton can provide an effective treatment outside the clinical environment without the physical presence of the therapist. Third, the mechatronics setup is integrated with a sensorized object, which allows for the execution of manipulation exercises and the recording of patient's improvements. In this paper, we also present the results of the experimental characterization carried out to verify the system usability of the proposed architecture with healthy volunteers.",
"title": ""
},
{
"docid": "20eaba97d10335134fa79835966643ba",
"text": "Limited research has been done on exoskeletons to enable faster movements of the lower extremities. An exoskeleton’s mechanism can actually hinder agility by adding weight, inertia and friction to the legs; compensating inertia through control is particularly difficult due to instability issues. The added inertia will reduce the natural frequency of the legs, probably leading to lower step frequency during walking. We present a control method that produces an approximate compensation of an exoskeleton’s inertia. The aim is making the natural frequency of the exoskeleton-assisted leg larger than that of the unaided leg. The method uses admittance control to compensate the weight and friction of the exoskeleton. Inertia compensation is emulated by adding a feedback loop consisting of low-pass filtered acceleration multiplied by a negative gain. This gain simulates negative inertia in the low-frequency range. We tested the controller on a statically supported, single-DOF exoskeleton that assists swing movements of the leg. Subjects performed movement sequences, first unassisted and then using the exoskeleton, in the context of a computer-based task resembling a race. With zero inertia compensation, the steady-state frequency of leg swing was consistently reduced. Adding inertia compensation enabled subjects to recover their normal frequency of swing.",
"title": ""
},
{
"docid": "49e148ddb4c5798c157e8568c10fae3d",
"text": "Aesthetic quality estimation of an image is a challenging task. In this paper, we introduce a deep CNN approach to tackle this problem. We adopt the sate-of-the-art object-recognition CNN as our baseline model, and adapt it for handling several high-level attributes. The networks capable of dealing with these high-level concepts are then fused by a learned logical connector for predicting the aesthetic rating. Results on the standard benchmark shows the effectiveness of our approach.",
"title": ""
},
{
"docid": "644d2fcc7f2514252c2b9da01bb1ef42",
"text": "We now described an interesting application of SVD to text do cuments. Suppose we represent documents as a bag of words, soXij is the number of times word j occurs in document i, for j = 1 : W andi = 1 : D, where W is the number of words and D is the number of documents. To find a document that contains a g iven word, we can use standard search procedures, but this can get confuse d by ynonomy (different words with the same meaning) andpolysemy (same word with different meanings). An alternative approa ch is to assume that X was generated by some low dimensional latent representation X̂ ∈ IR, whereK is the number of latent dimensions. If we compare documents in the latent space, we should get improved retrie val performance, because words of similar meaning get mapped to similar low dimensional locations. We can compute a low dimensional representation of X by computing the SVD, and then taking the top k singular values/ vectors: 1",
"title": ""
},
{
"docid": "1aa89c7b8be417345d78d1657d5f487f",
"text": "This paper proposes a new novel snubberless current-fed half-bridge front-end isolated dc/dc converter-based inverter for photovoltaic applications. It is suitable for grid-tied (utility interface) as well as off-grid (standalone) application based on the mode of control. The proposed converter attains clamping of the device voltage by secondary modulation, thus eliminating the need of snubber or active-clamp. Zero-current switching or natural commutation of primary devices and zero-voltage switching of secondary devices is achieved. Soft-switching is inherent owing to the proposed secondary modulation and is maintained during wide variation in voltage and power transfer capacity and thus is suitable for photovoltaic (PV) applications. Primary device voltage is clamped at reflected output voltage, and secondary device voltage is clamped at output voltage. Steady-state operation and analysis, and design procedure are presented. Simulation results using PSIM 9.0 are given to verify the proposed analysis and design. An experimental converter prototype rated at 200 W has been designed, built, and tested in the laboratory to verify and demonstrate the converter performance over wide variations in input voltage and output power for PV applications. The proposed converter is a true isolated boost converter and has higher voltage conversion (boost) ratio compared to the conventional active-clamped converter.",
"title": ""
},
{
"docid": "ab813ff20324600d5b765377588c9475",
"text": "Estimating the flows of rivers can have significant economic impact, as this can help in agricultural water management and in protection from water shortages and possible flood damage. The first goal of this paper is to apply neural networks to the problem of forecasting the flow of the River Nile in Egypt. The second goal of the paper is to utilize the time series as a benchmark to compare between several neural-network forecasting methods.We compare between four different methods to preprocess the inputs and outputs, including a novel method proposed here based on the discrete Fourier series. We also compare between three different methods for the multistep ahead forecast problem: the direct method, the recursive method, and the recursive method trained using a backpropagation through time scheme. We also include a theoretical comparison between these three methods. The final comparison is between different methods to perform longer horizon forecast, and that includes ways to partition the problem into the several subproblems of forecasting K steps ahead.",
"title": ""
}
] |
scidocsrr
|
135f4254a084e49e8850309c718021a9
|
Simulation of a photovoltaic panels by using Matlab/Simulink
|
[
{
"docid": "82bea5203ab102bbef0b8663d999abb2",
"text": "This paper proposes a novel simplified two-diode model of a photovoltaic (PV) module. The main aim of this study is to represent a PV module as an ideal two-diode model. In order to reduce computational time, the proposed model has a photocurrent source, i.e., two ideal diodes, neglecting the series and shunt resistances. Only four unknown parameters from the datasheet are required in order to analyze the proposed model. The simulation results that are obtained by MATLAB/Simulink are validated with experimental data of a commercial PV module, using different PV technologies such as multicrystalline and monocrystalline, supplied by the manufacturer. It is envisaged that this work can be useful for professionals who require a simple and accurate PV simulator for their design.",
"title": ""
}
] |
[
{
"docid": "a9ea1f1f94a26181addac948837c3030",
"text": "Crime tends to clust er geographi cally. This has led to the wide usage of hotspot analysis to identify and visualize crime. Accurately identified crime hotspots can greatly benefit the public by creating accurate threat visualizations, more efficiently allocating police resources, and predicting crime. Yet existing mapping methods usually identify hotspots without considering the underlying correlates of crime. In this study, we introduce a spatial data mining framework to study crime hotspots through their related variables. We use Geospatial Discriminative Patterns (GDPatterns) to capture the significant difference between two classes (hotspots and normal areas) in a geo-spatial dataset. Utilizing GDPatterns, we develop a novel model—Hotspot Optimization Tool (HOT)—to improve the identification of crime hotspots. Finally, based on a similarity measure, we group GDPattern clusters and visualize the distribution and characteristics of crime related variables. We evaluate our approach using a real world dataset collected from a northeast city in the United States. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cfd8458a802341eb20ffc14644cd9fad",
"text": "Wireless Sensor Networks (WSNs) are crucial in supporting continuous environmental monitoring, where sensor nodes are deployed and must remain operational to collect and transfer data from the environment to a base-station. However, sensor nodes have limited energy in their primary power storage unit, and this energy may be quickly drained if the sensor node remains operational over long periods of time. Therefore, the idea of harvesting ambient energy from the immediate surroundings of the deployed sensors, to recharge the batteries and to directly power the sensor nodes, has recently been proposed. The deployment of energy harvesting in environmental field systems eliminates the dependency of sensor nodes on battery power, drastically reducing the maintenance costs required to replace batteries. In this article, we review the state-of-the-art in energy-harvesting WSNs for environmental monitoring applications, including Animal Tracking, Air Quality Monitoring, Water Quality Monitoring, and Disaster Monitoring to improve the ecosystem and human life. In addition to presenting the technologies for harvesting energy from ambient sources and the protocols that can take advantage of the harvested energy, we present challenges that must be addressed to further advance energy-harvesting-based WSNs, along with some future work directions to address these challenges.",
"title": ""
},
{
"docid": "d79d6dd8267c66ad98f33bd54ff68693",
"text": "We propose a multigrid extension of convolutional neural networks (CNNs). Rather than manipulating representations living on a single spatial grid, our network layers operate across scale space, on a pyramid of grids. They consume multigrid inputs and produce multigrid outputs, convolutional filters themselves have both within-scale and cross-scale extent. This aspect is distinct from simple multiscale designs, which only process the input at different scales. Viewed in terms of information flow, a multigrid network passes messages across a spatial pyramid. As a consequence, receptive field size grows exponentially with depth, facilitating rapid integration of context. Most critically, multigrid structure enables networks to learn internal attention and dynamic routing mechanisms, and use them to accomplish tasks on which modern CNNs fail. Experiments demonstrate wide-ranging performance advantages of multigrid. On CIFAR and ImageNet classification tasks, flipping from a single grid to multigrid within the standard CNN paradigm improves accuracy, while being compute and parameter efficient. Multigrid is independent of other architectural choices, we show synergy in combination with residual connections. Multigrid yields dramatic improvement on a synthetic semantic segmentation dataset. Most strikingly, relatively shallow multigrid networks can learn to directly perform spatial transformation tasks, where, in contrast, current CNNs fail. Together, our results suggest that continuous evolution of features on a multigrid pyramid is a more powerful alternative to existing CNN designs on a flat grid.",
"title": ""
},
{
"docid": "bb408cedbb0fc32f44326eff7a7390f7",
"text": "A fully integrated SONET OC-192 transmitter IC using a standard CMOS process consists of an input data register, FIFO, CMU, and 16:1 multiplexer to give a 10Gb/s serial output. A higher FEC rate, 10.7Gb/s, is supported. This chip, using a 0.18/spl mu/m process, exceeds SONET requirements, dissipating 450mW.",
"title": ""
},
{
"docid": "6001982cb50621fe488034d6475d1894",
"text": "Few-shot learning has become essential for producing models that generalize from few examples. In this work, we identify that metric scaling and metric task conditioning are important to improve the performance of few-shot algorithms. Our analysis reveals that simple metric scaling completely changes the nature of few-shot algorithm parameter updates. Metric scaling provides improvements up to 14% in accuracy for certain metrics on the mini-Imagenet 5-way 5-shot classification task. We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in learning a task-dependent metric space. Moreover, we propose and empirically test a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space. The resulting few-shot learning model based on the task-dependent scaled metric achieves state of the art on mini-Imagenet. We confirm these results on another few-shot dataset that we introduce in this paper based on CIFAR100.",
"title": ""
},
{
"docid": "9e208e6beed62575a92f32031b7af8ad",
"text": "Recently, interests on cleaning robots workable in pipes (termed as in-pipe cleaning robot) are increasing because Garbage Automatic Collection Facilities (i.e, GACF) are widely being installed in Seoul metropolitan area of Korea. So far research on in-pipe robot has been focused on inspection rather than cleaning. In GACF, when garbage is moving, we have to remove the impurities which are stuck to the inner face of the pipe (diameter: 300mm or 400mm). Thus, in this paper, by using TRIZ (Inventive Theory of Problem Solving in Russian abbreviation), we will propose an in-pipe cleaning robot of GACF with the 6-link sliding mechanism which can be adjusted to fit into the inner face of pipe using pneumatic pressure(not spring). The proposed in-pipe cleaning robot for GACF can have forward/backward movement itself as well as rotation of brush in cleaning. The robot body should have the limited size suitable for the smaller pipe with diameter of 300mm. In addition, for the pipe with diameter of 400mm, the links of robot should stretch to fit into the diameter of the pipe by using the sliding mechanism. Based on the conceptual design using TRIZ, we will set up the initial design of the robot in collaboration with a field engineer of Robot Valley, Inc. in Korea. For the optimal design of in-pipe cleaning robot, the maximum impulsive force of collision between the robot and the inner face of pipe is simulated by using RecurDyn® when the link of sliding mechanism is stretched to fit into the 400mm diameter of the pipe. The stresses exerted on the 6 links of sliding mechanism by the maximum impulsive force will be simulated by using ANSYS® Workbench based on the Design Of Experiment(in short DOE). Finally the optimal dimensions including thicknesses of 4 links will be decided in order to have the best safety factor as 2 in this paper as well as having the minimum mass of 4 links. It will be verified that the optimal design of 4 links has the best safety factor close to 2 as well as having the minimum mass of 4 links, compared with the initial design performed by the expert of Robot Valley, Inc. In addition, the prototype of in-pipe cleaning robot will be stated with further research.",
"title": ""
},
{
"docid": "b9087793bd9bcc37deef95d1eea09f25",
"text": "BACKGROUND\nDolutegravir (GSK1349572), a once-daily HIV integrase inhibitor, has shown potent antiviral response and a favourable safety profile. We evaluated safety, efficacy, and emergent resistance in antiretroviral-experienced, integrase-inhibitor-naive adults with HIV-1 with at least two-class drug resistance.\n\n\nMETHODS\nING111762 (SAILING) is a 48 week, phase 3, randomised, double-blind, active-controlled, non-inferiority study that began in October, 2010. Eligible patients had two consecutive plasma HIV-1 RNA assessments of 400 copies per mL or higher (unless >1000 copies per mL at screening), resistance to two or more classes of antiretroviral drugs, and had one to two fully active drugs for background therapy. Participants were randomly assigned (1:1) to once-daily dolutegravir 50 mg or twice-daily raltegravir 400 mg, with investigator-selected background therapy. Matching placebo was given, and study sites were masked to treatment assignment. The primary endpoint was the proportion of patients with plasma HIV-1 RNA less than 50 copies per mL at week 48, evaluated in all participants randomly assigned to treatment groups who received at least one dose of study drug, excluding participants at one site with violations of good clinical practice. Non-inferiority was prespecified with a 12% margin; if non-inferiority was established, then superiority would be tested per a prespecified sequential testing procedure. A key prespecified secondary endpoint was the proportion of patients with treatment-emergent integrase-inhibitor resistance. The trial is registered at ClinicalTrials.gov, NCT01231516.\n\n\nFINDINGS\nAnalysis included 715 patients (354 dolutegravir; 361 raltegravir). At week 48, 251 (71%) patients on dolutegravir had HIV-1 RNA less than 50 copies per mL versus 230 (64%) patients on raltegravir (adjusted difference 7·4%, 95% CI 0·7 to 14·2); superiority of dolutegravir versus raltegravir was then concluded (p=0·03). Significantly fewer patients had virological failure with treatment-emergent integrase-inhibitor resistance on dolutegravir (four vs 17 patients; adjusted difference -3·7%, 95% CI -6·1 to -1·2; p=0·003). Adverse event frequencies were similar across groups; the most commonly reported events for dolutegravir versus raltegravir were diarrhoea (71 [20%] vs 64 [18%] patients), upper respiratory tract infection (38 [11%] vs 29 [8%]), and headache (33 [9%] vs 31 [9%]). Safety events leading to discontinuation were infrequent in both groups (nine [3%] dolutegravir, 14 [4%] raltegravir).\n\n\nINTERPRETATION\nOnce-daily dolutegravir, in combination with up to two other antiretroviral drugs, is well tolerated with greater virological effect compared with twice-daily raltegravir in this treatment-experienced patient group.\n\n\nFUNDING\nViiV Healthcare.",
"title": ""
},
{
"docid": "93da542bb389c9ef6177f0cce6d6ad79",
"text": "Public private partnerships (PPP) are long lasting contracts, generally involving large sunk investments, and developed in contexts of great uncertainty. If uncertainty is taken as an assumption, rather as a threat, it could be used as an opportunity. This requires managerial flexibility. The paper addresses the concept of contract flexibility as well as the several possibilities for its incorporation into PPP development. Based upon existing classifications, the authors propose a double entry matrix as a new model for contract flexibility. A case study has been selected – a hospital – to assess and evaluate the benefits of developing a flexible contract, building a model based on the real options theory. The evidence supports the initial thesis that allowing the concessionaire to adapt, under certain boundaries, the infrastructure and services to changing conditions when new information is known, does increase the value of the project. Some policy implications are drawn. © 2012 Elsevier Ltd. APM and IPMA. All rights reserved.",
"title": ""
},
{
"docid": "4292a60a5f76fd3e794ce67d2ed6bde3",
"text": "If two translation systems differ differ in performance on a test set, can we trust that this indicates a difference in true system quality? To answer this question, we describe bootstrap resampling methods to compute statistical significance of test results, and validate them on the concrete example of the BLEU score. Even for small test sizes of only 300 sentences, our methods may give us assurances that test result differences are real.",
"title": ""
},
{
"docid": "9201fc08a8479c6ef0908c3aeb12e5fe",
"text": "Twitter is one of the most popular social media platforms that has 313 million monthly active users which post 500 million tweets per day. This popularity attracts the attention of spammers who use Twitter for their malicious aims such as phishing legitimate users or spreading malicious software and advertises through URLs shared within tweets, aggressively follow/unfollow legitimate users and hijack trending topics to attract their attention, propagating pornography. In August of 2014, Twitter revealed that 8.5% of its monthly active users which equals approximately 23 million users have automatically contacted their servers for regular updates. Thus, detecting and filtering spammers from legitimate users are mandatory in order to provide a spam-free environment in Twitter. In this paper, features of Twitter spam detection presented with discussing their effectiveness. Also, Twitter spam detection methods are categorized and discussed with their pros and cons. The outdated features of Twitter which are commonly used by Twitter spam detection approaches are highlighted. Some new features of Twitter which, to the best of our knowledge, have not been mentioned by any other works are also presented. Keywords—Twitter spam; spam detection; spam filtering;",
"title": ""
},
{
"docid": "c1a96dbed9373dddd0a7a07770395a7e",
"text": "Mobile devices are increasingly the dominant Internet access technology. Nevertheless, high costs, data caps, and throttling are a source of widespread frustration, and a significant barrier to adoption in emerging markets. This paper presents Flywheel, an HTTP proxy service that extends the life of mobile data plans by compressing responses in-flight between origin servers and client browsers. Flywheel is integrated with the Chrome web browser and reduces the size of proxied web pages by 50% for a median user. We report measurement results from millions of users as well as experience gained during three years of operating and evolving the production",
"title": ""
},
{
"docid": "b46a9871dc64327f1ab79fa22de084ce",
"text": "Traditional address scanning attacks mainly rely on the naive 'brute forcing' approach, where the entire IPv4 address space is exhaustively searched by enumerating different possibilities. However, such an approach is inefficient for IPv6 due to its vast subnet size (i.e., 2^64). As a result, it is widely assumed that address scanning attacks are less feasible in IPv6 networks. In this paper, we evaluate new IPv6 reconnaissance techniques in real IPv6 networks and expose how to leverage the Domain Name System (DNS) for IPv6 network reconnaissance. We collected IPv6 addresses from 5 regions and 100,000 domains by exploiting DNS reverse zone and DNSSEC records. We propose a DNS Guard (DNSG) to efficiently detect DNS reconnaissance attacks in IPv6 networks. DNSG is a plug and play component that could be added to the existing infrastructure. We implement DNSG using Bro and Suricata. Our results demonstrate that DNSG could effectively block DNS reconnaissance attacks.",
"title": ""
},
{
"docid": "64411f1f8a998c9b23b9641fe1917db4",
"text": "Microwave power transmission (MPT) has had a long history before the more recent movement toward wireless power transmission (WPT). MPT can be applied not only to beam-type point-to-point WPT but also to an energy harvesting system fed from distributed or broadcasting radio waves. The key technology is the use of a rectenna, or rectifying antenna, to convert a microwave signal to a DC signal with high efficiency. In this paper, various rectennas suitable for MPT are discussed, including various rectifying circuits, frequency rectennas, and power rectennas.",
"title": ""
},
{
"docid": "0734e55ef60e9e1ef490c03a23f017e8",
"text": "High-voltage (HV) pulses are used in pulsed electric field (PEF) applications to provide an effective electroporation process, a process in which harmful microorganisms are disinfected when subjected to a PEF. Depending on the PEF application, different HV pulse specifications are required such as the pulse-waveform shape, the voltage magnitude, the pulse duration, and the pulse repetition rate. In this paper, a generic pulse-waveform generator (GPG) is proposed, and the GPG topology is based on half-bridge modular multilevel converter (HB-MMC) cells. The GPG topology is formed of four identical arms of series-connected HB-MMC cells forming an H-bridge. Unlike the conventional HB-MMC-based converters in HVdc transmission, the GPG load power flow is not continuous which leads to smaller size cell capacitors utilization; hence, smaller footprint of the GPG is achieved. The GPG topology flexibility allows the controller software to generate a basic multilevel waveform which can be manipulated to generate the commonly used PEF pulse waveforms. Therefore, the proposed topology offers modularity, redundancy, and scalability. The viability of the proposed GPG converter is validated by MATLAB/Simulink simulation and experimentation.",
"title": ""
},
{
"docid": "b395aa3ae750ddfd508877c30bae3a38",
"text": "This paper presents a technology review of voltage-source-converter topologies for industrial medium-voltage drives. In this highly active area, different converter topologies and circuits have found their application in the market. This paper covers the high-power voltage-source inverter and the most used multilevel-inverter topologies, including the neutral-point-clamped, cascaded H-bridge, and flying-capacitor converters. This paper presents the operating principle of each topology and a review of the most relevant modulation methods, focused mainly on those used by industry. In addition, the latest advances and future trends of the technology are discussed. It is concluded that the topology and modulation-method selection are closely related to each particular application, leaving a space on the market for all the different solutions, depending on their unique features and limitations like power or voltage level, dynamic performance, reliability, costs, and other technical specifications.",
"title": ""
},
{
"docid": "b80151949d837ffffdc680e9822b9691",
"text": "Neuronal activity causes local changes in cerebral blood flow, blood volume, and blood oxygenation. Magnetic resonance imaging (MRI) techniques sensitive to changes in cerebral blood flow and blood oxygenation were developed by high-speed echo planar imaging. These techniques were used to obtain completely noninvasive tomographic maps of human brain activity, by using visual and motor stimulus paradigms. Changes in blood oxygenation were detected by using a gradient echo (GE) imaging sequence sensitive to the paramagnetic state of deoxygenated hemoglobin. Blood flow changes were evaluated by a spin-echo inversion recovery (IR), tissue relaxation parameter T1-sensitive pulse sequence. A series of images were acquired continuously with the same imaging pulse sequence (either GE or IR) during task activation. Cine display of subtraction images (activated minus baseline) directly demonstrates activity-induced changes in brain MR signal observed at a temporal resolution of seconds. During 8-Hz patterned-flash photic stimulation, a significant increase in signal intensity (paired t test; P less than 0.001) of 1.8% +/- 0.8% (GE) and 1.8% +/- 0.9% (IR) was observed in the primary visual cortex (V1) of seven normal volunteers. The mean rise-time constant of the signal change was 4.4 +/- 2.2 s for the GE images and 8.9 +/- 2.8 s for the IR images. The stimulation frequency dependence of visual activation agrees with previous positron emission tomography observations, with the largest MR signal response occurring at 8 Hz. Similar signal changes were observed within the human primary motor cortex (M1) during a hand squeezing task and in animal models of increased blood flow by hypercapnia. By using intrinsic blood-tissue contrast, functional MRI opens a spatial-temporal window onto individual brain physiology.",
"title": ""
},
{
"docid": "37a574d4d969fc681c93508bd14cc904",
"text": "A new low offset dynamic comparator for high resolution high speed analog-to-digital application has been designed. Inputs are reconfigured from the typical differential pair comparator such that near equal current distribution in the input transistors can be achieved for a meta-stable point of the comparator. Restricted signal swing clock for the tail current is also used to ensure constant currents in the differential pairs. Simulation based sensitivity analysis is performed to demonstrate the robustness of the new comparator with respect to stray capacitances, common mode voltage errors and timing errors in a TSMC 0.18mu process. Less than 10mV offset can be easily achieved with the proposed structure making it favorable for flash and pipeline data conversion applications",
"title": ""
},
{
"docid": "f2742f6876bdede7a67f4ec63d73ead9",
"text": "Momentum methods play a central role in optimization. Several momentum methods are provably optimal, and all use a technique called estimate sequences to analyze their convergence properties. The technique of estimate sequences has long been considered difficult to understand, leading many researchers to generate alternative, “more intuitive” methods and analyses. In this paper we show there is an equivalence between the technique of estimate sequences and a family of Lyapunov functions in both continuous and discrete time. This framework allows us to develop a simple and unified analysis of many existing momentum algorithms, introduce several new algorithms, and most importantly, strengthen the connection between algorithms and continuous-time dynamical systems.",
"title": ""
},
{
"docid": "eeee6fceaec33b4b1ef5aed9f8b0dcf5",
"text": "This paper presents a novel orthomode transducer (OMT) with the dimension of WR-10 waveguide. The internal structure of the OMT is in the shape of Y so we named it a Y-junction OMT, it contain one square waveguide port with the dimension 2.54mm × 2.54mm and two WR-10 rectangular waveguide ports with the dimension of 1.27mm × 2.54mm. The operating frequency band of OMT is 70-95GHz (more than 30% bandwidth) with simulated insertion loss <;-0.3dB and cross polarization better than -40dB throughout the band for both TE10 and TE01 modes.",
"title": ""
}
] |
scidocsrr
|
8ab4c7502d246208fbb03518c0a34b02
|
Probabilistic sentential decision diagrams: Learning with massive logical constraints
|
[
{
"docid": "25346cdef3e97173dab5b5499c4d4567",
"text": "The key limiting factor in graphical model inference and learning is the complexity of the partition function. We thus ask the question: what are the most general conditions under which the partition function is tractable? The answer leads to a new kind of deep architecture, which we call sum-product networks (SPNs) and will present in this abstract.",
"title": ""
},
{
"docid": "d104206fd95525192240e9a6d6aedd89",
"text": "Graphical models are usually learned without regard to the cost of doing inference with them. As a result, even if a good model is learned, it may perform poorly at prediction, because it requires approximate inference. We propose an alternative: learning models with a score function that directly penalizes the cost of inference. Specifically, we learn arithmetic circuits with a penalty on the number of edges in the circuit (in which the cost of inference is linear). Our algorithm is equivalent to learning a Bayesian network with context-specific independence by greedily splitting conditional distributions, at each step scoring the candidates by compiling the resulting network into an arithmetic circuit, and using its size as the penalty. We show how this can be done efficiently, without compiling a circuit from scratch for each candidate. Experiments on several real-world domains show that our algorithm is able to learn tractable models with very large treewidth, and yields more accurate predictions than a standard context-specific Bayesian network learner, in far less time.",
"title": ""
}
] |
[
{
"docid": "bd8788c3d4adc5f3671f741e884c7f34",
"text": "We present a method for learning an embedding that places images of humans in similar poses nearby. This embedding can be used as a direct method of comparing images based on human pose, avoiding potential challenges of estimating body joint positions. Pose embedding learning is formulated under a triplet-based distance criterion. A deep architecture is used to allow learning of a representation capable of making distinctions between different poses. Experiments on human pose matching and retrieval from video data demonstrate the potential of the method.",
"title": ""
},
{
"docid": "94c6f94e805a366c6fa6f995f13a92ba",
"text": "Unusual site deep vein thrombosis (USDVT) is an uncommon form of venous thromboembolism (VTE) with heterogeneity in pathophysiology and clinical features. While the need for anticoagulation treatment is generally accepted, there is little data on optimal USDVT treatment. The TRUST study aimed to characterize the epidemiology, treatment and outcomes of USDVT. From 2008 to 2012, 152 patients were prospectively enrolled at 4 Canadian centers. After baseline, patients were followed at 6, 12 and 24months. There were 97 (64%) cases of splanchnic, 33 (22%) cerebral, 14 (9%) jugular, 6 (4%) ovarian and 2 (1%) renal vein thrombosis. Mean age was 52.9years and 113 (74%) cases were symptomatic. Of 72 (47%) patients tested as part of clinical care, 22 (31%) were diagnosed with new thrombophilia. Of 138 patients evaluated in follow-up, 66 (48%) completed at least 6months of anticoagulation. Estrogen exposure or inflammatory conditions preceding USDVT were commonly associated with treatment discontinuation before 6months, while previous VTE was associated with continuing anticoagulation beyond 6months. During follow-up, there were 22 (16%) deaths (20 from cancer), 4 (3%) cases of recurrent VTE and no fatal bleeding events. Despite half of USDVT patients receiving <6months of anticoagulation, the rate of VTE recurrence was low and anticoagulant treatment appears safe. Thrombophilia testing was common and thrombophilia prevalence was high. Further research is needed to determine the optimal investigation and management of USDVT.",
"title": ""
},
{
"docid": "473d8cbcd597c961819c5be6ab2e658e",
"text": "Mobile terrestrial laser scanners (MTLS), based on light detection and ranging sensors, are used worldwide in agricultural applications. MTLS are applied to characterize the geometry and the structure of plants and crops for technical and scientific purposes. Although MTLS exhibit outstanding performance, their high cost is still a drawback for most agricultural applications. This paper presents a low-cost alternative to MTLS based on the combination of a Kinect v2 depth sensor and a real time kinematic global navigation satellite system (GNSS) with extended color information capability. The theoretical foundations of this system are exposed along with some experimental results illustrating their performance and limitations. This study is focused on open-field agricultural applications, although most conclusions can also be extrapolated to similar outdoor uses. The developed Kinect-based MTLS system allows to select different acquisition frequencies and fields of view (FOV), from one to 512 vertical slices. The authors conclude that the better performance is obtained when a FOV of a single slice is used, but at the price of a very low measuring speed. With that particular configuration, plants, crops, and objects are reproduced accurately. Future efforts will be directed to increase the scanning efficiency by improving both the hardware and software components and to make it feasible using both partial and full FOV.",
"title": ""
},
{
"docid": "9d26d8e8b34319defd89a0daca3969e9",
"text": "The paper presents a safe robot navigation system based on omnidirectional vision. The 360 degree camera images are analyzed for obstacle detection and avoidance and of course for navigating safely in the given indoor environment. This module can process images in real time and extracts the direction and distance information of the obstacles from the camera system mounted on the robot. This two data is the output of the module. Because of the distortions of the omnidirectional vision, it is necessary to calibrate the camera and not only for that but also to get the right direction and distances information. Several image processing methods and technics were used which are investigated in the rest of this paper.",
"title": ""
},
{
"docid": "14ab095775e6687bde93fbe7849475f5",
"text": "In this paper, we present an evolutionary trust game to investigate the formation of trust in the so-called sharing economy from a population perspective. To the best of our knowledge, this is the first attempt to model trust in the sharing economy using the evolutionary game theory framework. Our sharing economy trust model consists of four types of players: a trustworthy provider, an untrustworthy provider, a trustworthy consumer, and an untrustworthy consumer. Through systematic simulation experiments, five different scenarios with varying proportions and types of providers and consumers were considered. Our results show that each type of players influences the existence and survival of other types of players, and untrustworthy players do not necessarily dominate the population even when the temptation to defect (i.e., to be untrustworthy) is high. Our findings may have important implications for understanding the emergence of trust in the context of sharing economy transactions.",
"title": ""
},
{
"docid": "7aca9b586e30a735c51ffe38ef858b38",
"text": "In Chapter 1, Reigeluth described design theory as being different from descriptive theory in that it offers means to achieve goals. For an applied field like education, design theory is more useful and more easily applied than its descriptive counterpart, learning theory. But none of the 22 theories described in this book has yet been developed to a state of perfection; at very least they can all benefit from more detailed guidance for applying their methods to diverse situations. And more theories are sorely needed to provide guidance for additional kinds of learning and human development and for different kinds of situations, including the use of new information technologies as tools. This leads us to the important question, \" What research methods are most helpful for creating and improving instructional design theories? \" In this chapter, we offer a detailed description of one research methodology that holds much promise for generating the kind of knowledge that we believe is most useful to educators—a methodology that several theorists in this book have intuitively used to develop their theories. We refer to this methodology as \"formative research\"—a kind of developmental research or action research that is intended to improve design theory for designing instructional practices or processes. Reigeluth (1989) and Romiszowski (1988) have recommended this approach to expand the knowledge base in instructional-design theory. Newman (1990) has suggested something similar for research on the organizational impact of computers in schools. And Greeno, Collins and Resnick (1996) have identified several groups of researchers who are conducting something similar that they call \" design experiments, \" in which \" researchers and practitioners, particularly teachers, collaborate in the design, implementation, and analysis of changes in practice. \" (p. 15) Formative research has also been used for generating knowledge in as broad an area as systemic change in education We intend for this chapter to help guide educational researchers who are developing and refining instructional-design theories. Most researchers have not had the opportunity to learn formal research methodologies for developing design theories. Doctoral programs in universities tend to emphasize quantitative and qualitative research methodologies for creating descriptive knowledge of education. However, design theories are guidelines for practice, which tell us \"how to do\" education, not \"what is.\" We have found that traditional quantitative research methods (e.g., experiments, surveys, correlational analyses) are not particularly useful for improving instructional-design theory— especially in the early stages of development. Instead, …",
"title": ""
},
{
"docid": "56b3a5ff0295d0ffce2a60dc60c0033a",
"text": "This first installment of the new Human Augmentation department looks at various technologies designed to augment the human intellect and amplify human perception and cognition. Linking back to early work in interactive computing, Albrecht Schmidt considers how novel technologies can create a new relationship between digital technologies and humans.",
"title": ""
},
{
"docid": "16d3a7217182ad331d85eb619fa459ee",
"text": "Pupil diameter was monitored during picture viewing to assess effects of hedonic valence and emotional arousal on pupillary responses. Autonomic activity (heart rate and skin conductance) was concurrently measured to determine whether pupillary changes are mediated by parasympathetic or sympathetic activation. Following an initial light reflex, pupillary changes were larger when viewing emotionally arousing pictures, regardless of whether these were pleasant or unpleasant. Pupillary changes during picture viewing covaried with skin conductance change, supporting the interpretation that sympathetic nervous system activity modulates these changes in the context of affective picture viewing. Taken together, the data provide strong support for the hypothesis that the pupil's response during affective picture viewing reflects emotional arousal associated with increased sympathetic activity.",
"title": ""
},
{
"docid": "b12defb3d9d7c5ccda8c3e0b0858f55f",
"text": "We investigate a simple yet effective method to introduce inhibitory and excitatory interactions between units in the layers of a deep neural network classifier. The method is based on the greedy layer-wise procedure of deep learning algorithms and extends the denoising autoencoder (Vincent et al., 2008) by adding asymmetric lateral connections between its hidden coding units, in a manner that is much simpler and computationally more efficient than previously proposed approaches. We present experiments on two character recognition problems which show for the first time that lateral connections can significantly improve the classification performance of deep networks.",
"title": ""
},
{
"docid": "088fdd091c2cc70f2e000622be4f3c62",
"text": "Data is currently one of the most important assets for companies in every field. The continuous growth in the importance and volume of data has created a new problem: it cannot be handled by traditional analysis techniques. This problem was, therefore, solved through the creation of a new paradigm: Big Data. However, Big Data originated new issues related not only to the volume or the variety of the data, but also to data security and privacy. In order to obtain a full perspective of the problem, we decided to carry out an investigation with the objective of highlighting the main issues regarding Big Data security, and also the solutions proposed by the scientific community to solve them. In this paper, we explain the results obtained after applying a systematic mapping study to security in the Big Data ecosystem. It is almost impossible to carry out detailed research into the entire topic of security, and the outcome of this research is, therefore, a big picture of the main problems related to security in a Big Data system, along with the principal solutions to them proposed by the research community.",
"title": ""
},
{
"docid": "77e61d56d297b62e1078542fd74ffe5e",
"text": "This paper introduces a complete design method to construct an adaptive fuzzy logic controller (AFLC) for DC–DC converter. In a conventional fuzzy logic controller (FLC), knowledge on the system supplied by an expert is required for developing membership functions (parameters) and control rules. The proposed AFLC, on the other hand, do not required expert for making parameters and control rules. Instead, parameters and rules are generated using a model data file, which contains summary of input–output pairs. The FLC use Mamdani type fuzzy logic controllers for the defuzzification strategy and inference operators. The proposed controller is designed and verified by digital computer simulation and then implemented for buck, boost and buck–boost converters by using an 8-bit microcontroller. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e1edaf3e8754e8403b9be29f58ba3550",
"text": "This paper presents a simulation framework for pathological gait assistance with a hip exoskeleton. Previously we had developed an event-driven controller for gait assistance [1]. We now simulate (or optimize) the gait assistance in ankle pathologies (e.g., weak dorsiflexion or plantarflexion). It is done by 1) utilizing the neuromuscular walking model, 2) parameterizing assistive torques for swing and stance legs, and 3) performing dynamic optimizations that takes into account the human-robot interactive dynamics. We evaluate the energy expenditures and walking parameters for the different gait types. Results show that each gait type should have a different assistance strategy comparing with the assistance of normal gait. Although we need further studies about the pathologies, our simulation model is feasible to design the gait assistance for the ankle muscle weaknesses.",
"title": ""
},
{
"docid": "22c6ae71c708d5e2d1bc7e5e085c4842",
"text": "Head pose estimation is a fundamental task for face and social related research. Although 3D morphable model (3DMM) based methods relying on depth information usually achieve accurate results, they usually require frontal or mid-profile poses which preclude a large set of applications where such conditions can not be garanteed, like monitoring natural interactions from fixed sensors placed in the environment. A major reason is that 3DMM models usually only cover the face region. In this paper, we present a framework which combines the strengths of a 3DMM model fitted online with a prior-free reconstruction of a 3D full head model providing support for pose estimation from any viewpoint. In addition, we also proposes a symmetry regularizer for accurate 3DMM fitting under partial observations, and exploit visual tracking to address natural head dynamics with fast accelerations. Extensive experiments show that our method achieves state-of-the-art performance on the public BIWI dataset, as well as accurate and robust results on UbiPose, an annotated dataset of natural interactions that we make public and where adverse poses, occlusions or fast motions regularly occur.",
"title": ""
},
{
"docid": "a4a56e0647849c22b48e7e5dc3f3049b",
"text": "The paper describes a 2D sound source mapping system for a mobile robot. We developed a multiple sound sources localization method for a mobile robot with a 32 channel concentric microphone array. The system can separate multiple moving sound sources using direction localization. Directional localization and separation of different pressure sound sources is achieved using the delay and sum beam forming (DSBF) and the frequency band selection (FBS) algorithm. Sound sources were mapped by using a wheeled robot equipped with the microphone array. The robot localizes sounds direction on the move and estimates sound sources position using triangulation. Assuming the movement of sound sources, the system set a time limit and uses only the last few seconds data. By using the random sample consensus (RANSAC) algorithm for position estimation, we achieved 2D multiple sound source mapping from time limited data with high accuracy. Also, moving sound source separation is experimentally demonstrated with segments of the DSBF enhanced signal derived from the localization process",
"title": ""
},
{
"docid": "53cf85922865609c4a7591bd06679660",
"text": "Speeded visual word naming and lexical decision performance are reported for 2428 words for young adults and healthy older adults. Hierarchical regression techniques were used to investigate the unique predictive variance of phonological features in the onsets, lexical variables (e.g., measures of consistency, frequency, familiarity, neighborhood size, and length), and semantic variables (e.g. imageahility and semantic connectivity). The influence of most variables was highly task dependent, with the results shedding light on recent empirical controversies in the available word recognition literature. Semantic-level variables accounted for unique variance in both speeded naming and lexical decision performance, level with the latter task producing the largest semantic-level effects. Discussion focuses on the utility of large-scale regression studies in providing a complementary approach to the standard factorial designs to investigate visual word recognition.",
"title": ""
},
{
"docid": "0a94a995f91afd641013b97dcec7da2a",
"text": "Two competing encoding concepts are known to scale well with growing amounts of XML data: XPath Accelerator encoding implemented by MonetDB for in-memory documents and X-Hive’s Persistent DOM for on-disk storage. We identified two ways to improve XPath Accelerator and present prototypes for the respective techniques: BaseX boosts inmemory performance with optimized data and value index structures while Idefix introduces native block-oriented persistence with logarithmic update behavior for true scalability, overcoming main-memory constraints. An easy-to-use Java-based benchmarking framework was developed and used to consistently compare these competing techniques and perform scalability measurements. The established XMark benchmark was applied to all four systems under test. Additional fulltext-sensitive queries against the well-known DBLP database complement the XMark results. Not only did the latest version of X-Hive finally surprise with good scalability and performance numbers. Also, both BaseX and Idefix hold their promise to push XPath Accelerator to its limits: BaseX efficiently exploits available main memory to speedup XML queries while Idefix surpasses main-memory constraints and rivals the on-disk leadership of X-Hive. The competition between XPath Accelerator and Persistent DOM definitely is relaunched.",
"title": ""
},
{
"docid": "e56af4a3a8fbef80493d77b441ee1970",
"text": "A new, systematic, simplified design procedure for quasi-Yagi antennas is presented. The design is based on the simple impedance matching among antenna components: i.e., transition, feed, and antenna. This new antenna design is possible due to the newly developed ultra-wideband transition. As design examples, wideband quasi- Yagi antennas are successfully designed and implemented in Ku- and Ka-bands with frequency bandwidths of 53.2% and 29.1%, and antenna gains of 4-5 dBi and 5.2-5.8 dBi, respectively. The design method can be applied to other balanced antennas and their arrays.",
"title": ""
},
{
"docid": "7835a3ecdb9a8563e29ee122e5987503",
"text": "Women diagnosed with complete spinal cord injury (SCI) at T10 or higher report sensations generated by vaginal-cervical mechanical self-stimulation (CSS). In this paper we review brain responses to sexual arousal and orgasm in such women, and further hypothesize that the afferent pathway for this unexpected perception is provided by the Vagus nerves, which bypass the spinal cord. Using functional magnetic resonance imaging (fMRI), we ascertained that the region of the medulla oblongata to which the Vagus nerves project (the Nucleus of the Solitary Tract or NTS) is activated by CSS. We also used an objective measure, CSS-induced analgesia response to experimentally induced finger pain, to ascertain the functionality of this pathway. During CSS, several women experienced orgasms. Brain regions activated during orgasm included the hypothalamic paraventricular nucleus, amygdala, accumbens-bed nucleus of the stria terminalis-preoptic area, hippocampus, basal ganglia (especially putamen), cerebellum, and anterior cingulate, insular, parietal and frontal cortices, and lower brainstem (central gray, mesencephalic reticular formation, and NTS). We conclude that the Vagus nerves provide a spinal cord-bypass pathway for vaginal-cervical sensibility and that activation of this pathway can produce analgesia and orgasm.",
"title": ""
},
{
"docid": "7ee4a708d41065c619a5bf9e86f871a3",
"text": "Cyber attack comes in various approach and forms, either internally or externally. Remote access and spyware are forms of cyber attack leaving an organization to be susceptible to vulnerability. This paper investigates illegal activities and potential evidence of cyber attack through studying the registry on the Windows 7 Home Premium (32 bit) Operating System in using the application Virtual Network Computing (VNC) and keylogger application. The aim is to trace the registry artifacts left by the attacker which connected using Virtual Network Computing (VNC) protocol within Windows 7 Operating System (OS). The analysis of the registry focused on detecting unwanted applications or unauthorized access to the machine with regard to the user activity via the VNC connection for the potential evidence of illegal activities by investigating the Registration Entries file and image file using the Forensic Toolkit (FTK) Imager. The outcome of this study is the findings on the artifacts which correlate to the user activity.",
"title": ""
}
] |
scidocsrr
|
819cab6856ab332744e87d70cdd04247
|
A Supervised Patch-Based Approach for Human Brain Labeling
|
[
{
"docid": "3342e2f79a6bb555797224ac4738e768",
"text": "Regions in three-dimensional magnetic resonance (MR) brain images can be classified using protocols for manually segmenting and labeling structures. For large cohorts, time and expertise requirements make this approach impractical. To achieve automation, an individual segmentation can be propagated to another individual using an anatomical correspondence estimate relating the atlas image to the target image. The accuracy of the resulting target labeling has been limited but can potentially be improved by combining multiple segmentations using decision fusion. We studied segmentation propagation and decision fusion on 30 normal brain MR images, which had been manually segmented into 67 structures. Correspondence estimates were established by nonrigid registration using free-form deformations. Both direct label propagation and an indirect approach were tested. Individual propagations showed an average similarity index (SI) of 0.754+/-0.016 against manual segmentations. Decision fusion using 29 input segmentations increased SI to 0.836+/-0.009. For indirect propagation of a single source via 27 intermediate images, SI was 0.779+/-0.013. We also studied the effect of the decision fusion procedure using a numerical simulation with synthetic input data. The results helped to formulate a model that predicts the quality improvement of fused brain segmentations based on the number of individual propagated segmentations combined. We demonstrate a practicable procedure that exceeds the accuracy of previous automatic methods and can compete with manual delineations.",
"title": ""
},
{
"docid": "6df12ee53551f4a3bd03bca4ca545bf1",
"text": "We present a technique for automatically assigning a neuroanatomical label to each voxel in an MRI volume based on probabilistic information automatically estimated from a manually labeled training set. In contrast to existing segmentation procedures that only label a small number of tissue classes, the current method assigns one of 37 labels to each voxel, including left and right caudate, putamen, pallidum, thalamus, lateral ventricles, hippocampus, and amygdala. The classification technique employs a registration procedure that is robust to anatomical variability, including the ventricular enlargement typically associated with neurological diseases and aging. The technique is shown to be comparable in accuracy to manual labeling, and of sufficient sensitivity to robustly detect changes in the volume of noncortical structures that presage the onset of probable Alzheimer's disease.",
"title": ""
}
] |
[
{
"docid": "097e2c17a34db96ba37f68e28058ceba",
"text": "ARTICLE The healing properties of compassion have been written about for centuries. The Dalai Lama often stresses that if you want others to be happy – focus on compassion; if you want to be happy yourself – focus on compassion (Dalai Lama 1995, 2001). Although all clinicians agree that compassion is central to the doctor–patient and therapist–client relationship, recently the components of com passion have been looked at through the lens of Western psychological science and research 2003a,b). Compassion can be thought of as a skill that one can train in, with in creasing evidence that focusing on and practising com passion can influence neurophysiological and immune systems (Davidson 2003; Lutz 2008). Compassionfocused therapy refers to the under pinning theory and process of applying a compassion model to psy chotherapy. Compassionate mind training refers to specific activities designed to develop compassion ate attributes and skills, particularly those that influence affect regula tion. Compassionfocused therapy adopts the philosophy that our under standing of psychological and neurophysiological processes is developing at such a rapid pace that we are now moving beyond 'schools of psychotherapy' towards a more integrated, biopsycho social science of psycho therapy (Gilbert 2009). Compassionfocused therapy and compassionate mind training arose from a number of observations. First, people with high levels of shame and self criticism can have enormous difficulty in being kind to themselves, feeling selfwarmth or being selfcompassionate. Second, it has long been known that problems of shame and selfcriticism are often rooted in histories of abuse, bullying, high expressed emo tion in the family, neglect and/or lack of affection Individuals subjected to early experiences of this type can become highly sensitive to threats of rejection or criticism from the outside world and can quickly become selfattacking: they experience both their external and internal worlds as easily turning hostile. Third, it has been recognised that working with shame and selfcriticism requires a thera peutic focus on memories of such early experiences And fourth, there are clients who engage with the cognitive and behavioural tasks of a therapy, and become skilled at generating (say) alternatives for their negative thoughts and beliefs, but who still do poorly in therapy (Rector 2000). They are likely to say, 'I understand the logic of my alterna tive thinking but it doesn't really help me feel much better' or 'I know I'm not to blame for the abuse but I still feel that I …",
"title": ""
},
{
"docid": "660fe15405c2006e20bcf0e4358c7283",
"text": "We introduce a framework for feature selection based on depe ndence maximization between the selected features and the labels of an estimation problem, u sing the Hilbert-Schmidt Independence Criterion. The key idea is that good features should be highl y dependent on the labels. Our approach leads to a greedy procedure for feature selection. We show that a number of existing feature selectors are special cases of this framework. Experiments on both artificial and real-world data show that our feature selector works well in practice.",
"title": ""
},
{
"docid": "c6a25dc466e4a22351359f17bd29916c",
"text": "We consider practical methods for adding constraints to the K-Means clustering algorithm in order to avoid local solutions with empty clusters or clusters having very few points. We often observe this phenomena when applying K-Means to datasets where the number of dimensions is n 10 and the number of desired clusters is k 20. We propose explicitly adding k constraints to the underlying clustering optimization problem requiring that each cluster have at least a minimum number of points in it. We then investigate the resulting cluster assignment step. Preliminary numerical tests on real datasets indicate the constrained approach is less prone to poor local solutions, producing a better summary of the underlying data. Contrained K-Means Clustering 1",
"title": ""
},
{
"docid": "7ffbc12161510aa8ef01d804df9c5648",
"text": "Networks represent relationships between entities in many complex systems, spanning from online social interactions to biological cell development and brain connectivity. In many cases, relationships between entities are unambiguously known: are two users “friends” in a social network? Do two researchers collaborate on a published article? Do two road segments in a transportation system intersect? These are directly observable in the system in question. In most cases, relationships between nodes are not directly observable and must be inferred: Does one gene regulate the expression of another? Do two animals who physically co-locate have a social bond? Who infected whom in a disease outbreak in a population?\n Existing approaches for inferring networks from data are found across many application domains and use specialized knowledge to infer and measure the quality of inferred network for a specific task or hypothesis. However, current research lacks a rigorous methodology that employs standard statistical validation on inferred models. In this survey, we examine (1) how network representations are constructed from underlying data, (2) the variety of questions and tasks on these representations over several domains, and (3) validation strategies for measuring the inferred network’s capability of answering questions on the system of interest.",
"title": ""
},
{
"docid": "ef8d88d57858706ba269a8f3aaa989f3",
"text": "The mid 20 century witnessed some serious attempts in studies of play and games with an emphasis on their importance within culture. Most prominently, Johan Huizinga (1944) maintained in his book Homo Ludens that the earliest stage of culture is in the form of play and that culture proceeds in the shape and the mood of play. He also claimed that some elements of play crystallised as knowledge such as folklore, poetry and philosophy as culture advanced.",
"title": ""
},
{
"docid": "43fc8ff9339780cc91762a28e36aaad7",
"text": "The Internet of things(IoT) has brought the vision of the smarter world into reality and including healthcare it has a many application domains. The convergence of IOT-cloud can play a significant role in the smart healthcare by offering better insight of healthcare content to support affordable and quality patient care. In this paper, we proposed a model that allows the sensor to monitor the patient's symptom. The collected monitored data transmitted to the gateway via Bluetooth and then to the cloud server through docker container using the internet. Thus enabling the physician to diagnose and monitor health problems wherever the patient is. Also, we address the several challenges related to health monitoring and management using IoT.",
"title": ""
},
{
"docid": "a1fe64aacbbe80a259feee2874645f09",
"text": "Database consolidation is gaining wide acceptance as a means to reduce the cost and complexity of managing database systems. However, this new trend poses many interesting challenges for understanding and predicting system performance. The consolidated databases in multi-tenant settings share resources and compete with each other for these resources. In this work we present an experimental study to highlight how these interactions can be fairly complex. We argue that individual database staging or workload profiling is not an adequate approach to understanding the performance of the consolidated system. Our initial investigations suggest that machine learning approaches that use monitored data to model the system can work well for important tasks.",
"title": ""
},
{
"docid": "39cde8c4da81d72d7a0ff058edb71409",
"text": "One glaring weakness of Java for numerical programming is its lack of support for complex numbers. Simply creating a Complex number class leads to poor performance relative to Fortran. We show in this paper, however, that the combination of such aComplex class and a compiler that understands its semantics does indeed lead to Fortran-like performance. This performance gain is achieved while leaving the Java language completely unchanged and maintaining full compatibility with existing Java Virtual Machines . We quantify the effectiveness of our approach through experiments with linear algebra, electromagnetics, and computational fluid-dynamics kernels.",
"title": ""
},
{
"docid": "231365d1de30f3529752510ec718dd38",
"text": "The lack of reliability of gliding contacts in highly constrained environments induces manufacturers to develop contactless transmission power systems such as rotary transformers. The following paper proposes an optimal design methodology for rotary transformers supplied from a low-voltage source at high temperatures. The method is based on an accurate multidisciplinary analysis model divided into magnetic, thermal and electrical parts, optimized thanks to a sequential quadratic programming method. The technique is used to discuss the design particularities of rotary transformers. Two optimally designed structures of rotary transformers : an iron silicon coaxial one and a ferrite pot core one, are compared.",
"title": ""
},
{
"docid": "e94183f4200b8c6fef1f18ec0e340869",
"text": "Hoon Sohn Engineering Sciences & Applications Division, Engineering Analysis Group, M/S C926 Los Alamos National Laboratory, Los Alamos, NM 87545 e-mail: [email protected] Charles R. Farrar Engineering Sciences & Applications Division, Engineering Analysis Group, M/S C946 e-mail: [email protected] Norman F. Hunter Engineering Sciences & Applications Division, Measurement Technology Group, M/S C931 e-mail: [email protected] Keith Worden Department of Mechanical Engineering University of Sheffield Mappin St. Sheffield S1 3JD, United Kingdom e-mail: [email protected]",
"title": ""
},
{
"docid": "e677799d3bee1b25e74dc6c547c1b6c2",
"text": "Street View serves millions of Google users daily with panoramic imagery captured in hundreds of cities in 20 countries across four continents. A team of Google researchers describes the technical challenges involved in capturing, processing, and serving street-level imagery on a global scale.",
"title": ""
},
{
"docid": "daac9ee402eebc650fe4f98328a7965d",
"text": "5.1. Detection Formats 475 5.2. Food Quality and Safety Analysis 477 5.2.1. Pathogens 477 5.2.2. Toxins 479 5.2.3. Veterinary Drugs 479 5.2.4. Vitamins 480 5.2.5. Hormones 480 5.2.6. Diagnostic Antibodies 480 5.2.7. Allergens 481 5.2.8. Proteins 481 5.2.9. Chemical Contaminants 481 5.3. Medical Diagnostics 481 5.3.1. Cancer Markers 481 5.3.2. Antibodies against Viral Pathogens 482 5.3.3. Drugs and Drug-Induced Antibodies 483 5.3.4. Hormones 483 5.3.5. Allergy Markers 483 5.3.6. Heart Attack Markers 484 5.3.7. Other Molecular Biomarkers 484 5.4. Environmental Monitoring 484 5.4.1. Pesticides 484 5.4.2. 2,4,6-Trinitrotoluene (TNT) 485 5.4.3. Aromatic Hydrocarbons 485 5.4.4. Heavy Metals 485 5.4.5. Phenols 485 5.4.6. Polychlorinated Biphenyls 487 5.4.7. Dioxins 487 5.5. Summary 488 6. Conclusions 489 7. Abbreviations 489 8. Acknowledgment 489 9. References 489",
"title": ""
},
{
"docid": "96d90b5e2046b4629f1625649256ecaa",
"text": "Today's smartphones are equipped with precise motion sensors like accelerometer and gyroscope, which can measure tiny motion and rotation of devices. While they make mobile applications more functional, they also bring risks of leaking users' privacy. Researchers have found that tap locations on screen can be roughly inferred from motion data of the device. They mostly utilized this side-channel for inferring short input like PIN numbers and passwords, with repeated attempts to boost accuracy. In this work, we study further for longer input inference, such as chat record and e-mail content, anything a user ever typed on a soft keyboard. Since people increasingly rely on smartphones for daily activities, their inputs directly or indirectly expose privacy about them. Thus, it is a serious threat if their input text is leaked.\n To make our attack practical, we utilize the shared memory side-channel for detecting window events and tap events of a soft keyboard. The up or down state of the keyboard helps triggering our Trojan service for collecting accelerometer and gyroscope data. Machine learning algorithms are used to roughly predict the input text from the raw data and language models are used to further correct the wrong predictions. We performed experiments on two real-life scenarios, which were writing emails and posting Twitter messages, both through mobile clients. Based on the experiments, we show the feasibility of inferring long user inputs to readable sentences from motion sensor data. By applying text mining technology on the inferred text, more sensitive information about the device owners can be exposed.",
"title": ""
},
{
"docid": "a5e960a4b20959a1b4a85e08eebab9d3",
"text": "This paper presents a new class of dual-, tri- and quad-band BPF by using proposed open stub-loaded shorted stepped-impedance resonator (OSLSSIR). The OSLSSIR consists of a two-end-shorted three-section stepped-impedance resistor (SIR) with two identical open stubs loaded at its impedance junctions. Two 50- Ω tapped lines are directly connected to two shorted sections of the SIR to serve as I/O ports. As the electrical lengths of two identical open stubs increase, many more transmission poles (TPs) and transmission zeros (TZs) can be shifted or excited within the interested frequency range. The TZs introduced by open stubs divide the TPs into multiple groups, which can be applied to design a multiple-band bandpass filter (BPF). In order to increase many more design freedoms for tuning filter performance, a high-impedance open stub and the narrow/broad side coupling are introduced as perturbations in all filters design, which can tune the even- and odd-mode TPs separately. In addition, two branches of I/O coupling and open stub-loaded shorted microstrip line are employed in tri- and quad-band BPF design. As examples, two dual-wideband BPFs, one tri-band BPF, and one quad-band BPF have been successfully developed. The fabricated four BPFs have merits of compact sizes, low insertion losses, and high band-to-band isolations. The measured results are in good agreement with the full-wave simulated results.",
"title": ""
},
{
"docid": "b2e02a1818f862357cf5764afa7fa197",
"text": "The goal of this paper is the automatic identification of characters in TV and feature film material. In contrast to standard approaches to this task, which rely on the weak supervision afforded by transcripts and subtitles, we propose a new method requiring only a cast list. This list is used to obtain images of actors from freely available sources on the web, providing a form of partial supervision for this task. In using images of actors to recognize characters, we make the following three contributions: (i) We demonstrate that an automated semi-supervised learning approach is able to adapt from the actor’s face to the character’s face, including the face context of the hair; (ii) By building voice models for every character, we provide a bridge between frontal faces (for which there is plenty of actor-level supervision) and profile (for which there is very little or none); and (iii) by combining face context and speaker identification, we are able to identify characters with partially occluded faces and extreme facial poses. Results are presented on the TV series ‘Sherlock’ and the feature film ‘Casablanca’. We achieve the state-of-the-art on the Casablanca benchmark, surpassing previous methods that have used the stronger supervision available from transcripts.",
"title": ""
},
{
"docid": "b9d25bdbb337a9d16a24fa731b6b479d",
"text": "The implementation of effective strategies to manage leaks represents an essential goal for all utilities involved with drinking water supply in order to reduce water losses affecting urban distribution networks. This study concerns the early detection of leaks occurring in small-diameter customers’ connections to water supply networks. An experimental campaign was carried out in a test bed to investigate the sensitivity of Acoustic Emission (AE) monitoring to water leaks. Damages were artificially induced on a polyethylene pipe (length 28 m, outer diameter 32 mm) at different distances from an AE transducer. Measurements were performed in both unburied and buried pipe conditions. The analysis permitted the identification of a clear correlation between three monitored parameters (namely total Hits, Cumulative Counts and Cumulative Amplitude) and the characteristics of the examined leaks.",
"title": ""
},
{
"docid": "afce201838e658aac3e18c2f26cff956",
"text": "With the current set of design tools and methods available to game designers, vast portions of the space of possible games are not currently reachable. In the past, technological advances such as improved graphics and new controllers have driven the creation of new forms of gameplay, but games have still not made great strides into new gameplay experiences. We argue that the development of innovative artificial intelligence (AI) systems plays a crucial role in the exploration of currently unreachable spaces. To aid in exploration, we suggest a practice called AI-based game design, an iterative design process that deeply integrates the affordances of an AI system within the context of game design. We have applied this process in our own projects, and in this paper we present how it has pushed the boundaries of current game genres and experiences, as well as discuss the future AI-based game design.",
"title": ""
},
{
"docid": "37e552e4352cd5f8c76dcefd856e0fc8",
"text": "Following the increasing popularity of mobile ecosystems, cybercriminals have increasingly targeted them, designing and distributing malicious apps that steal information or cause harm to the device’s owner. Aiming to counter them, detection techniques based on either static or dynamic analysis that model Android malware, have been proposed. While the pros and cons of these analysis techniques are known, they are usually compared in the context of their limitations e.g., static analysis is not able to capture runtime behaviors, full code coverage is usually not achieved during dynamic analysis, etc. Whereas, in this paper, we analyze the performance of static and dynamic analysis methods in the detection of Android malware and attempt to compare them in terms of their detection performance, using the same modeling approach. To this end, we build on MAMADROID, a state-of-the-art detection system that relies on static analysis to create a behavioral model from the sequences of abstracted API calls. Then, aiming to apply the same technique in a dynamic analysis setting, we modify CHIMP, a platform recently proposed to crowdsource human inputs for app testing, in order to extract API calls’ sequences from the traces produced while executing the app on a CHIMP virtual device. We call this system AUNTIEDROID and instantiate it by using both automated (Monkey) and user-generated inputs. We find that combining both static and dynamic analysis yields the best performance, with F -measure reaching 0.92. We also show that static analysis is at least as effective as dynamic analysis, depending on how apps are stimulated during execution, and, finally, investigate the reasons for inconsistent misclassifications across methods.",
"title": ""
},
{
"docid": "eb7eb6777a68fd594e2e94ac3cba6be9",
"text": "Cellulosic plant material represents an as-of-yet untapped source of fermentable sugars for significant industrial use. Many physio-chemical structural and compositional factors hinder the enzymatic digestibility of cellulose present in lignocellulosic biomass. The goal of any pretreatment technology is to alter or remove structural and compositional impediments to hydrolysis in order to improve the rate of enzyme hydrolysis and increase yields of fermentable sugars from cellulose or hemicellulose. These methods cause physical and/or chemical changes in the plant biomass in order to achieve this result. Experimental investigation of physical changes and chemical reactions that occur during pretreatment is required for the development of effective and mechanistic models that can be used for the rational design of pretreatment processes. Furthermore, pretreatment processing conditions must be tailored to the specific chemical and structural composition of the various, and variable, sources of lignocellulosic biomass. This paper reviews process parameters and their fundamental modes of action for promising pretreatment methods.",
"title": ""
},
{
"docid": "036cbf58561de8bfa01ddc4fa8d7b8f2",
"text": "The purpose of this paper is to discover a semi-optimal set of trading rules and to investigate its effectiveness as applied to Egyptian Stocks. The aim is to mix different categories of technical trading rules and let an automatic evolution process decide which rules are to be used for particular time series. This difficult task can be achieved by using genetic algorithms (GA's), they permit the creation of artificial experts taking their decisions from an optimal subset of the a given set of trading rules. The GA's based on the survival of the fittest, do not guarantee a global optimum but they are known to constitute an effective approach in optimizing non-linear functions. Selected liquid stocks are tested and GA trading rules were compared with other conventional and well known technical analysis rules. The Proposed GA system showed clear better average profit and in the same high sharpe ratio, which indicates not only good profitability but also better risk-reward trade-off",
"title": ""
}
] |
scidocsrr
|
5009c4e6aecd17bd7a2e9b3f2f74a0db
|
Iterative Entity Alignment via Joint Knowledge Embeddings
|
[
{
"docid": "99d9dcef0e4441ed959129a2a705c88e",
"text": "Wikipedia has grown to a huge, multi-lingual source of encyclopedic knowledge. Apart from textual content, a large and everincreasing number of articles feature so-called infoboxes, which provide factual information about the articles’ subjects. As the different language versions evolve independently, they provide different information on the same topics. Correspondences between infobox attributes in different language editions can be leveraged for several use cases, such as automatic detection and resolution of inconsistencies in infobox data across language versions, or the automatic augmentation of infoboxes in one language with data from other language versions. We present an instance-based schema matching technique that exploits information overlap in infoboxes across different language editions. As a prerequisite we present a graph-based approach to identify articles in different languages representing the same real-world entity using (and correcting) the interlanguage links in Wikipedia. To account for the untyped nature of infobox schemas, we present a robust similarity measure that can reliably quantify the similarity of strings with mixed types of data. The qualitative evaluation on the basis of manually labeled attribute correspondences between infoboxes in four of the largest Wikipedia editions demonstrates the effectiveness of the proposed approach. 1. Entity and Attribute Matching across Wikipedia Languages Wikipedia is a well-known public encyclopedia. While most of the information contained in Wikipedia is in textual form, the so-called infoboxes provide semi-structured, factual information. They are displayed as tables in many Wikipedia articles and state basic facts about the subject. There are different templates for infoboxes, each targeting a specific category of articles and providing fields for properties that are relevant for the respective subject type. For example, in the English Wikipedia, there is a class of infoboxes about companies, one to describe the fundamental facts about countries (such as their capital and population), one for musical artists, etc. However, each of the currently 281 language versions1 defines and maintains its own set of infobox classes with their own set of properties, as well as providing sometimes different values for corresponding attributes. Figure 1 shows extracts of the English and German infoboxes for the city of Berlin. The arrows indicate matches between properties. It is already apparent that matching purely based on property names is futile: The terms Population density and Bevölkerungsdichte or Governing parties and Reg. Parteien have no textual similarity. However, their property values are more revealing: <3,857.6/km2> and <3.875 Einw. je km2> or <SPD/Die Linke> and <SPD und Die Linke> have a high textual similarity, respectively. Email addresses: [email protected] (Daniel Rinser), [email protected] (Dustin Lange), [email protected] (Felix Naumann) 1as of March 2011 Our overall goal is to automatically find a mapping between attributes of infobox templates across different language versions. Such a mapping can be valuable for several different use cases: First, it can be used to increase the information quality and quantity in Wikipedia infoboxes, or at least help the Wikipedia communities to do so. Inconsistencies among the data provided by different editions for corresponding attributes could be detected automatically. 
For example, the infobox in the English article about Germany claims that the population is 81,799,600, while the German article specifies a value of 81,768,000 for the same country. Detecting such conflicts can help the Wikipedia communities to increase consistency and information quality across language versions. Further, the detected inconsistencies could be resolved automatically by fusing the data in infoboxes, as proposed by [1]. Finally, the coverage of information in infoboxes could be increased significantly by completing missing attribute values in one Wikipedia edition with data found in other editions. An infobox template does not describe a strict schema, so that we need to collect the infobox template attributes from the template instances. For the purpose of this paper, an infobox template is determined by the set of attributes that are mentioned in any article that reference the template. The task of matching attributes of corresponding infoboxes across language versions is a specific application of schema matching. Automatic schema matching is a highly researched topic and numerous different approaches have been developed for this task as surveyed in [2] and [3]. Among these, schema-level matchers exploit attribute labels, schema constraints, and structural similarities of the schemas. However, in the setting of Wikipedia infoboxes these Preprint submitted to Information Systems October 19, 2012 Figure 1: A mapping between the English and German infoboxes for Berlin techniques are not useful, because infobox definitions only describe a rather loose list of supported properties, as opposed to a strict relational or XML schema. Attribute names in infoboxes are not always sound, often cryptic or abbreviated, and the exact semantics of the attributes are not always clear from their names alone. Moreover, due to our multi-lingual scenario, attributes are labeled in different natural languages. This latter problem might be tackled by employing bilingual dictionaries, if the previously mentioned issues were solved. Due to the flat nature of infoboxes and their lack of constraints or types, other constraint-based matching approaches must fail. On the other hand, there are instance-based matching approaches, which leverage instance data of multiple data sources. Here, the basic assumption is that similarity of the instances of the attributes reflects the similarity of the attributes. To assess this similarity, instance-based approaches usually analyze the attributes of each schema individually, collecting information about value patterns and ranges, amongst others, such as in [4]. A different, duplicate-based approach exploits information overlap across data sources [5]. The idea there is to find two representations of same real-world objects (duplicates) and then suggest mappings between attributes that have the same or similar values. This approach has one important requirement: The data sources need to share a sufficient amount of common instances (or tuples, in a relational setting), i.e., instances describing the same real-world entity. Furthermore, the duplicates either have to be known in advance or have to be discovered despite a lack of knowledge of corresponding attributes. The approach presented in this article is based on such duplicate-based matching. Our approach consists of three steps: Entity matching, template matching, and attribute matching. The process is visualized in Fig. 2. 
(1) Entity matching: First, we find articles in different language versions that describe the same real-world entity. In particular, we make use of the crosslanguage links that are present for most Wikipedia articles and provide links between same entities across different language versions. We present a graph-based approach to resolve conflicts in the linking information. (2) Template matching: We determine a cross-lingual mapping between infobox templates by analyzing template co-occurrences in the language versions. (3) Attribute matching: The infobox attribute values of the corresponding articles are compared to identify matching attributes across the language versions, assuming that the values of corresponding attributes are highly similar for the majority of article pairs. As a first step we analyze the quality of Wikipedia’s interlanguage links in Sec. 2. We show how to use those links to create clusters of semantically equivalent entities with only one entity from each language in Sec. 3. This entity matching approach is evaluated in Sec. 4. In Sec. 5, we show how a crosslingual mapping between infobox templates can be established. The infobox attribute matching approach is described in Sec. 6 and in turn evaluated in Sec. 7. Related work in the areas of ILLs, concept identification, and infobox attribute matching is discussed in Sec. 8. Finally, Sec. 9 draws conclusions and discusses future work. 2. Interlanguage Links Our basic assumption is that there is a considerable amount of information overlap across the different Wikipedia language editions. Our infobox matching approach presented later requires mappings between articles in different language editions",
"title": ""
}
] |
[
{
"docid": "c2c5f0f8b4647c651211b50411382561",
"text": "Obesity is a multifactorial disease that results from a combination of both physiological, genetic, and environmental inputs. Obesity is associated with adverse health consequences, including T2DM, cardiovascular disease, musculoskeletal disorders, obstructive sleep apnea, and many types of cancer. The probability of developing adverse health outcomes can be decreased with maintained weight loss of 5% to 10% of current body weight. Body mass index and waist circumference are 2 key measures of body fat. A wide variety of tools are available to assess obesity-related risk factors and guide management.",
"title": ""
},
{
"docid": "eca2bfe1b96489e155e19d02f65559d6",
"text": "• Oracle experiment: to understand how well these attributes, when used together, can explain persuasiveness, we train 3 linear SVM regressors, one for each component type, to score an arguments persuasiveness using gold attribute’s as features • Two human annotators who were both native speakers of English were first familiarized with the rubrics and definitions and then trained on five essays • 30 essays were doubly annotated for computing inter-annotator agreement • Each of the remaining essays was annotated by one of the annotators • Score/Class distributions by component type: Give me More Feedback: Annotating Argument Persusiveness and Related Attributes in Student Essays",
"title": ""
},
{
"docid": "4d894156dd1ad6864eb6b47ed6bee085",
"text": "Preference learning is a fundamental problem in various smart computing applications such as personalized recommendation. Collaborative filtering as a major learning technique aims to make use of users’ feedback, for which some recent works have switched from exploiting explicit feedback to implicit feedback. One fundamental challenge of leveraging implicit feedback is the lack of negative feedback, because there is only some observed relatively “positive” feedback available, making it difficult to learn a prediction model. In this paper, we propose a new and relaxed assumption of pairwise preferences over item-sets, which defines a user’s preference on a set of items (item-set) instead of on a single item only. The relaxed assumption can give us more accurate pairwise preference relationships. With this assumption, we further develop a general algorithm called CoFiSet (collaborative filtering via learning pairwise preferences over item-sets), which contains four variants, CoFiSet(SS), CoFiSet(MOO), CoFiSet(MOS) and CoFiSet(MSO), representing “Set vs. Set,” “Many ‘One vs. One’,” “Many ‘One vs. Set”’ and “Many ‘Set vs. One”’ pairwise comparisons, respectively. Experimental results show that our CoFiSet(MSO) performs better than several state-of-the-art methods on five ranking-oriented evaluation metrics on three real-world data sets.",
"title": ""
},
{
"docid": "44d468d53b66f719e569ea51bb94f6cb",
"text": "The paper gives an overview on the developments at the German Aerospace Center DLR towards anthropomorphic robots which not only tr y to approach the force and velocity performance of humans, but also have simi lar safety and robustness features based on a compliant behaviour. We achieve thi s compliance either by joint torque sensing and impedance control, or, in our newes t systems, by compliant mechanisms (so called VIA variable impedance actuators), whose intrinsic compliance can be adjusted by an additional actuator. Both appr o ches required highly integrated mechatronic design and advanced, nonlinear con trol a d planning strategies, which are presented in this paper.",
"title": ""
},
{
"docid": "306a33d3ad0f70eb6fa2209c63747a6f",
"text": "Omnidirectional cameras have a wide field of view and are thus used in many robotic vision tasks. An omnidirectional view may be acquired by a fisheye camera which provides a full image compared to catadioptric visual sensors and do not increase the size and the weakness of the imaging system with respect to perspective cameras. We prove that the unified model for catadioptric systems can model fisheye cameras with distortions directly included in its parameters. This unified projection model consists on a projection onto a virtual unitary sphere, followed by a perspective projection onto an image plane. The validity of this assumption is discussed and compared with other existing models. Calibration and partial Euclidean reconstruction results help to confirm the validity of our approach. Finally, an application to the visual servoing of a mobile robot is presented and experimented.",
"title": ""
},
{
"docid": "0d2ddb448c01172e53f19d9d5ac39f21",
"text": "Malicious Android applications are currently the biggest threat in the scope of mobile security. To cope with their exponential growth and with their deceptive and hideous behaviors, static analysis signature based approaches are not enough to timely detect and tackle brand new threats such as polymorphic and composition malware. This work presents BRIDEMAID, a novel framework for analysis of Android apps' behavior, which exploits both a static and dynamic approach to detect malicious apps directly on mobile devices. The static analysis is based on n-grams matching to statically recognize malicious app execution patterns. The dynamic analysis is instead based on multi-level monitoring of device, app and user behavior to detect and prevent at runtime malicious behaviors. The framework has been tested against 2794 malicious apps reporting a detection accuracy of 99,7% and a negligible false positive rate, tested on a set of 10k genuine apps.",
"title": ""
},
{
"docid": "8a21ff7f3e4d73233208d5faa70eb7ce",
"text": "Achieving robustness and energy efficiency in nanoscale CMOS process technologies is made challenging due to the presence of process, temperature, and voltage variations. Traditional fault-tolerance techniques such as N-modular redundancy (NMR) employ deterministic error detection and correction, e.g., majority voter, and tend to be power hungry. This paper proposes soft NMR that nontrivially extends NMR by consciously exploiting error statistics caused by nanoscale artifacts in order to design robust and energy-efficient systems. In contrast to conventional NMR, soft NMR employs Bayesian detection techniques in the voter. Soft voter algorithms are obtained through optimization of appropriate application aware cost functions. Analysis indicates that, on average, soft NMR outperforms conventional NMR. Furthermore, unlike NMR, in many cases, soft NMR is able to generate a correct output even when all N replicas are in error. This increase in robustness is then traded-off through voltage scaling to achieve energy efficiency. The design of a discrete cosine transform (DCT) image coder is employed to demonstrate the benefits of the proposed technique. Simulations in a commercial 45 nm, 1.2 V, CMOS process show that soft NMR provides up to 10× improvement in robustness, and 35 percent power savings over conventional NMR.",
"title": ""
},
{
"docid": "779cc0258ae35fd3b6d70c2a62a1a857",
"text": "Opinion mining and sentiment analysis have become popular in linguistic resource rich languages. Opinions for such analysis are drawn from many forms of freely available online/electronic sources, such as websites, blogs, news re-ports and product reviews. But attention received by less resourced languages is significantly less. This is because the success of any opinion mining algorithm depends on the availability of resources, such as special lexicon and WordNet type tools. In this research, we implemented a less complicated but an effective approach that could be used to classify comments in less resourced languages. We experimented the approach for use with Sinhala Language where no such opinion mining or sentiment analysis has been carried out until this day. Our algorithm gives significantly promising results for analyzing sentiments in Sinhala for the first time.",
"title": ""
},
{
"docid": "f420d1dc56ab1d78533ebff9754fbcce",
"text": "The purpose of this study was to survey the mental toughness and physical activity among student university of Tabriz. Baecke physical activity questionnaire, mental thoughness48 and demographic questionnaire was distributed between students. 355 questionnaires were collected. Correlation, , multiple ANOVA and independent t-test was used for analyzing the hypotheses. The result showed that there was significant relationship between some of physical activity and mental toughness subscales. Two groups active and non-active were compared to find out the mental toughness differences, Student who obtained the 75% upper the physical activity questionnaire was active (n=97) and Student who obtained the 25% under the physical activity questionnaire was inactive group (n=95).The difference between active and non-active physically people showed that active student was significantly mentally toughness. It is expected that changes in physical activity levels significantly could be evidence of mental toughness changes, it should be noted that the other variables should not be ignored.",
"title": ""
},
{
"docid": "2361e70109a3595241b2cdbbf431659d",
"text": "There is a trend in the scientific community to model and solve complex optimization problems by employing natural metaphors. This is mainly due to inefficiency of classical optimization algorithms in solving larger scale combinatorial and/or highly non-linear problems. The situation is not much different if integer and/or discrete decision variables are required in most of the linear optimization models as well. One of the main characteristics of the classical optimization algorithms is their inflexibility to adapt the solution algorithm to a given problem. Generally a given problem is modelled in such a way that a classical algorithm like simplex algorithm can handle it. This generally requires making several assumptions which might not be easy to validate in many situations. In order to overcome these limitations more flexible and adaptable general purpose algorithms are needed. It should be easy to tailor these algorithms to model a given problem as close as to reality. Based on this motivation many nature inspired algorithms were developed in the literature like genetic algorithms, simulated annealing and tabu search. It has also been shown that these algorithms can provide far better solutions in comparison to classical algorithms. A branch of nature inspired algorithms which are known as swarm intelligence is focused on insect behaviour in order to develop some meta-heuristics which can mimic insect's problem solution abilities. Ant colony optimization, particle swarm optimization, wasp nets etc. are some of the well known algorithms that mimic insect behaviour in problem modelling and solution. Artificial Bee Colony (ABC) is a relatively new member of swarm intelligence. ABC tries to model natural behaviour of real honey bees in food foraging. Honey bees use several mechanisms like waggle dance to optimally locate food sources and to search new ones. This makes them a good candidate for developing new intelligent search algorithms. In this chapter an extensive review of work on artificial bee algorithms is given. Afterwards, development of an ABC algorithm for solving generalized assignment problem which is known as NP-hard problem is presented in detail along with some comparisons. It is a well known fact that classical optimization techniques impose several limitations on solving mathematical programming and operational research models. This is mainly due to inherent solution mechanisms of these techniques. Solution strategies of classical optimization algorithms are generally depended on the type of objective and constraint",
"title": ""
},
{
"docid": "8b054ce1961098ec9c7d66db33c53abd",
"text": "This paper addresses the problem of single image depth estimation (SIDE), focusing on improving the accuracy of deep neural network predictions. In a supervised learning scenario, the quality of predictions is intrinsically related to the training labels, which guide the optimization process. For indoor scenes, structured-light-based depth sensors (e.g. Kinect) are able to provide dense, albeit short-range, depth maps. On the other hand, for outdoor scenes, LiDARs are still considered the standard sensor, which comparatively provide much sparser measurements, especially in areas further away. Rather than modifying the neural network architecture to deal with sparse depth maps, this article introduces a novel densification method for depth maps, using the Hilbert Maps framework. A continuous occupancy map is produced based on 3D points from LiDAR scans, and the resulting reconstructed surface is projected into a 2D depth map with arbitrary resolution. Experiments conducted with various subsets of the KITTI dataset show a significant improvement produced by the proposed Sparse-to-Continuous technique, without the introduction of extra information into the training stage.",
"title": ""
},
{
"docid": "53821da1274fd420fe0f7eeba024b95d",
"text": "An empirical study was performed to train naive subjects in the use of a prototype Boolean logic-based information retrieval system on a bibliographic database. Subjects were undergraduates with little or no prior computing experience. Subjects trained with a conceptual model of the system performed better than subjects trained with procedural instructions, but only on complex, problem-solving tasks. Performance was equal on simple tasks. Differences in patterns of interaction with the system (based on a stochastic process model) showed parallel results. Most subjects were able to articulate some description of the system's operation, but few articulated a model similar to the card catalog analogy provided in training. Eleven of 43 subjects were unable to achieve minimal competency in system use. The failure rate was equal between training conditions and genders; the only differences found between those passing and failing the benchmark test were academic major and in frequency of library use.",
"title": ""
},
{
"docid": "7f57322b6e998d629d1a67cd5fb28da9",
"text": "Background: We recently described “Author-ity,” a model for estimating the probability that two articles in MEDLINE, sharing the same author name, were written by the same individual. Features include shared title words, journal name, coauthors, medical subject headings, language, affiliations, and author name features (middle initial, suffix, and prevalence in MEDLINE). Here we test the hypothesis that the Author-ity model will suffice to disambiguate author names for the vast majority of articles in MEDLINE. Methods: Enhancements include: (a) incorporating first names and their variants, email addresses, and correlations between specific last names and affiliation words; (b) new methods of generating large unbiased training sets; (c) new methods for estimating the prior probability; (d) a weighted least squares algorithm for correcting transitivity violations; and (e) a maximum likelihood based agglomerative algorithm for computing clusters of articles that represent inferred author-individuals. Results: Pairwise comparisons were computed for all author names on all 15.3 million articles in MEDLINE (2006 baseline), that share last name and first initial, to create Author-ity 2006, a database that has each name on each article assigned to one of 6.7 million inferred author-individual clusters. Recall is estimated at ∼98.8%. Lumping (putting two different individuals into the same cluster) affects ∼0.5% of clusters, whereas splitting (assigning articles written by the same individual to >1 cluster) affects ∼2% of articles. Impact: The Author-ity model can be applied generally to other bibliographic databases. Author name disambiguation allows information retrieval and data integration to become person-centered, not just document-centered, setting the stage for new data mining and social network tools that will facilitate the analysis of scholarly publishing and collaboration behavior. Availability: The Author-ity 2006 database is available for nonprofit academic research, and can be freely queried via http://arrowsmith.psych.uic.edu.",
"title": ""
},
{
"docid": "6ee2ee4a1cff7b1ddb8e5e1e2faf3aa5",
"text": "An array of four uniform half-width microstrip leaky-wave antennas (MLWAs) was designed and tested to obtain maximum radiation in the boresight direction. To achieve this, uniform MLWAs are placed at 90 ° and fed by a single probe at the center. Four beams from four individual branches combine to form the resultant directive beam. The measured matched bandwidth of the array is 300 MHz (3.8-4.1 GHz). Its beam toward boresight occurs over a relatively wide 6.4% (3.8-4.05 GHz) band. The peak measured boresight gain of the array is 10.1 dBi, and its variation within the 250-MHz boresight radiation band is only 1.7 dB.",
"title": ""
},
{
"docid": "0879399fcb38c103a0e574d6d9010215",
"text": "We present a content-based method for recommending citations in an academic paper draft. We embed a given query document into a vector space, then use its nearest neighbors as candidates, and rerank the candidates using a discriminative model trained to distinguish between observed and unobserved citations. Unlike previous work, our method does not require metadata such as author names which can be missing, e.g., during the peer review process. Without using metadata, our method outperforms the best reported results on PubMed and DBLP datasets with relative improvements of over 18% in F1@20 and over 22% in MRR. We show empirically that, although adding metadata improves the performance on standard metrics, it favors selfcitations which are less useful in a citation recommendation setup. We release an online portal for citation recommendation based on our method,1 and a new dataset OpenCorpus of 7 million research articles to facilitate future research on this task.",
"title": ""
},
{
"docid": "c24550119d4251d6d7ce1219b8aa0ee4",
"text": "This article considers the delivery of efficient and effective dental services for patients whose disability and/or medical condition may not be obvious and which consequently can present a hidden challenge in the dental setting. Knowing that the patient has a particular condition, what its features are and how it impacts on dental treatment and oral health, and modifying treatment accordingly can minimise the risk of complications. The taking of a careful medical history that asks the right questions in a manner that encourages disclosure is key to highlighting hidden hazards and this article offers guidance for treating those patients who have epilepsy, latex sensitivity, acquired or inherited bleeding disorders and patients taking oral or intravenous bisphosphonates.",
"title": ""
},
{
"docid": "2393fc67fdca6b98695d0940fba19ca3",
"text": "Evaluation of network security is an essential step in securing any network. This evaluation can help security professionals in making optimal decisions about how to design security countermeasures, to choose between alternative security architectures, and to systematically modify security configurations in order to improve security. However, the security of a network depends on a number of dynamically changing factors such as emergence of new vulnerabilities and threats, policy structure and network traffic. Identifying, quantifying and validating these factors using security metrics is a major challenge in this area. In this paper, we propose a novel security metric framework that identifies and quantifies objectively the most significant security risk factors, which include existing vulnerabilities, historical trend of vulnerability of the remotely accessible services, prediction of potential vulnerabilities for any general network service and their estimated severity and finally policy resistance to attack propagation within the network. We then describe our rigorous validation experiments using real- life vulnerability data of the past 6 years from National Vulnerability Database (NVD) [10] to show the high accuracy and confidence of the proposed metrics. Some previous works have considered vulnerabilities using code analysis. However, as far as we know, this is the first work to study and analyze these metrics for network security evaluation using publicly available vulnerability information and security policy configuration.",
"title": ""
},
{
"docid": "99cd180d0bb08e6360328b77219919c1",
"text": "In this paper, we describe our approach to RecSys 2015 challenge problem. Given a dataset of item click sessions, the problem is to predict whether a session results in a purchase and which items are purchased if the answer is yes.\n We define a simpler analogous problem where given an item and its session, we try to predict the probability of purchase for the given item. For each session, the predictions result in a set of purchased items or often an empty set.\n We apply monthly time windows over the dataset. For each item in a session, we engineer features regarding the session, the item properties, and the time window. Then, a balanced random forest classifier is trained to perform predictions on the test set.\n The dataset is particularly challenging due to privacy-preserving definition of a session, the class imbalance problem, and the volume of data. We report our findings with respect to feature engineering, the choice of sampling schemes, and classifier ensembles. Experimental results together with benefits and shortcomings of the proposed approach are discussed. The solution is efficient and practical in commodity computers.",
"title": ""
},
{
"docid": "bb404a57964fcd5500006e039ba2b0dd",
"text": "The needs of the child are paramount. The clinician’s first task is to diagnose the cause of symptoms and signs whether accidental, inflicted or the result of an underlying medical condition. Where abuse is diagnosed the task is to safeguard the child and treat the physical and psychological effects of maltreatment. A child is one who has not yet reached his or her 18th birthday. Child abuse is any action by another person that causes significant harm to a child or fails to meet a basic need. It involves acts of both commission and omission with effects on the child’s physical, developmental, and psychosocial well-being. The vast majority of carers from whatever walk of life, love, nurture and protect their children. A very few, in a momentary loss of control in an otherwise caring parent, cause much regretted injury. An even smaller number repeatedly maltreat their children in what becomes a pattern of abuse. One parent may harm, the other may fail to protect by omitting to seek help. Child abuse whether physical or psychological is unlawful.",
"title": ""
}
] |
scidocsrr
|
b10bd07f3a3c5cb0ff56d279dac00f02
|
Modelling IT projects success with Fuzzy Cognitive Maps
|
[
{
"docid": "447c36d34216b8cb890776248d9cc010",
"text": "Fuzzy cognitive maps (FCMs) are fuzzy-graph structures for representing causal reasoning. Their fuzziness allows hazy degrees of causality between hazy causal objects (concepts). Their graph structure allows systematic causal propagation, in particular forward and backward chaining, and it allows knowledge bases to be grown by connecting different FCMs. FCMs are especially applicable to soft knowledge domains and several example FCMs are given. Causality is represented as a fuzzy relation on causal concepts. A fuzzy causal algebra for governing causal propagation on FCMs is developed. FCM matrix representation and matrix operations are presented in the Appendix.",
"title": ""
}
] |
[
{
"docid": "347509d68f6efd4da747a7a3e704a9a2",
"text": "Stack Overflow is widely regarded as the most popular Community driven Question Answering (CQA) website for programmers. Questions posted on Stack Overflow which are not related to programming topics, are marked as `closed' by experienced users and community moderators. A question can be `closed' for five reasons -- duplicate, off-topic, subjective, not a real question and too localized. In this work, we present the first study of `closed' questions on Stack Overflow. We download 4 years of publicly available data which contains 3.4 Million questions. We first analyze and characterize the complete set of 0.1 Million `closed' questions. Next, we use a machine learning framework and build a predictive model to identify a `closed' question at the time of question creation.\n One of our key findings is that despite being marked as `closed', subjective questions contain high information value and are very popular with the users. We observe an increasing trend in the percentage of closed questions over time and find that this increase is positively correlated to the number of newly registered users. In addition, we also see a decrease in community participation to mark a `closed' question which has led to an increase in moderation job time. We also find that questions closed with the Duplicate and Off Topic labels are relatively more prone to reputation gaming. Our analysis suggests broader implications for content quality maintenance on CQA websites. For the `closed' question prediction task, we make use of multiple genres of feature sets based on - user profile, community process, textual style and question content. We use a state-of-art machine learning classifier based on an ensemble framework and achieve an overall accuracy of 70.3%. Analysis of the feature space reveals that `closed' questions are relatively less informative and descriptive than non-`closed' questions. To the best of our knowledge, this is the first experimental study to analyze and predict `closed' questions on Stack Overflow.",
"title": ""
},
{
"docid": "5b0eaf636d6d8cf0523e3f00290b780f",
"text": "Toward materializing the recently identified potential of cognitive neuroscience for IS research (Dimoka, Pavlou and Davis 2007), this paper demonstrates how functional neuroimaging tools can enhance our understanding of IS theories. Specifically, this study aims to uncover the neural mechanisms that underlie technology adoption by identifying the brain areas activated when users interact with websites that differ on their level of usefulness and ease of use. Besides localizing the neural correlates of the TAM constructs, this study helps understand their nature and dimensionality, as well as uncover hidden processes associated with intentions to use a system. The study also identifies certain technological antecedents of the TAM constructs, and shows that the brain activations associated with perceived usefulness and perceived ease of use predict selfreported intentions to use a system. The paper concludes by discussing the study’s implications for underscoring the potential of functional neuroimaging for IS research and the TAM literature.",
"title": ""
},
{
"docid": "7f070d85f4680a2b88d3b530dff0cfc5",
"text": "An extensive data search among various types of developmental and evolutionary sequences yielded a `four quadrant' model of consciousness and its development (the four quadrants being intentional, behavioural, cultural, and social). Each of these dimensions was found to unfold in a sequence of at least a dozen major stages or levels. Combining the four quadrants with the dozen or so major levels in each quadrant yields an integral theory of consciousness that is quite comprehensive in its nature and scope. This model is used to indicate how a general synthesis and integration of twelve of the most influential schools of consciousness studies can be effected, and to highlight some of the most significant areas of future research. The conclusion is that an `all-quadrant, all-level' approach is the minimum degree of sophistication that we need into order to secure anything resembling a genuinely integral theory of consciousness.",
"title": ""
},
{
"docid": "a33f862d0b7dfde7b9f18aa193db9acf",
"text": "Phytoremediation is an important process in the removal of heavy metals and contaminants from the soil and the environment. Plants can help clean many types of pollution, including metals, pesticides, explosives, and oil. Phytoremediation in phytoextraction is a major technique. In this process is the use of plants or algae to remove contaminants in the soil, sediment or water in the harvesting of plant biomass. Heavy metal is generally known set of elements with atomic mass (> 5 gcm -3), particularly metals such as exchange of cadmium, lead and mercury. Between different pollutant cadmium (Cd) is the most toxic and plant and animal heavy metals. Mustard (Brassica juncea L.) and Sunflower (Helianthus annuus L.) are the plant for the production of high biomass and rapid growth, and it seems that the appropriate species for phytoextraction because it can compensate for the low accumulation of cadmium with a much higher biomass yield. To use chelators, such as acetic acid, ethylene diaminetetraacetic acid (EDTA), and to increase the solubility of metals in the soil to facilitate easy availability indiscernible and the absorption of the plant from root leg in vascular plants. *Corresponding Author: Awais Shakoor [email protected] Journal of Biodiversity and Environmental Sciences (JBES) ISSN: 2220-6663 (Print) 2222-3045 (Online) Vol. 10, No. 3, p. 88-98, 2017 http://www.innspub.net J. Bio. Env. Sci. 2017 89 | Shakoor et al. Introduction Phytoremediation consists of Greek and words of \"station\" and Latin remedium plants, which means \"rebalancing\" describes the treatment of environmental problems treatment (biological) through the use of plants that mitigate the environmental problem without digging contaminated materials and disposed of elsewhere. Controlled by the plant interactions with groundwater and organic and inorganic contaminated materials in specific locations to achieve therapeutic targets molecules site application (Landmeyer, 2011). Phytoremediation is the use of green plants to remove contaminants from the environment or render them harmless. The technology that uses plants to\" green space \"of heavy metals in the soil through the roots. While vacuum cleaners and you should be able to withstand and survive high levels of heavy metals in the soil unique plants (Baker, 2000). The main result in increasing the population and more industrialization are caused water and soil contamination that is harmful for environment as well as human health. In the whole world, contamination in the soil by heavy metals has become a very serious issue. So, removal of these heavy metals from the soil is very necessary to protect the soil and human health. Both inorganic and organic contaminants, like petroleum, heavy metals, agricultural waste, pesticide and fertilizers are the main source that deteriorate the soil health (Chirakkara et al., 2016). Heavy metals have great role in biological system, so we can divide into two groups’ essentials and non essential. Those heavy metals which play a vital role in biochemical and physiological function in some living organisms are called essential heavy metals, like zinc (Zn), nickel (Ni) and cupper (Cu) (Cempel and Nikel, 2006). In some living organisms, heavy metals don’t play any role in biochemical as well as physiological functions are called non essential heavy metals, such as mercury (Hg), lead (Pb), arsenic (As), and Cadmium (Cd) (Dabonne et al., 2010). 
Cadmium (Cd) is consider as a non essential heavy metal that is more toxic at very low concentration as compare to other non essential heavy metals. It is toxic to plant, human and animal health. Cd causes serious diseases in human health through the food chain (Rafiq et al., 2014). So, removal of Cd from the soil is very important problem to overcome these issues (Neilson and Rajakaruna, 2015). Several methods are used to remove the Cd from the soil, such as physical, chemical and physiochemical to increase the soil pH (Liu et al., 2015). The main source of Cd contamination in the soil and environment is automobile emissions, batteries and commercial fertilizers (Liu et al., 2015). Phytoremediation is a promising technique that is used in removing the heavy metals form the soil (Ma et al., 2011). Plants update the heavy metals through the root and change the soil properties which are helpful in increasing the soil fertility (Mench et al., 2009). Plants can help clean many types of pollution, including metals, pesticides, explosives, and oil. Plants also help prevent wind and rain, groundwater and implementation of pollution off site to other areas. Phytoremediation works best in locations with low to moderate amounts of pollution. Plants absorb harmful chemicals from the soil when the roots take in water and nutrients from contaminated soils, streams and groundwater. Once inside the plant and chemicals can be stored in the roots, stems, or leaves. Change of less harmful chemicals within the plant. Or a change in the gases that are released into the air as a candidate plant Agency (US Environmental Protection, 2001). Phytoremediation is the direct use of living green plants and minutes to stabilize or reduce pollution in soil, sludge, sediment, surface water or groundwater bodies with low concentrations of pollutants a large clean space and shallow depths site offers favorable treatment plant (associated with US Environmental Protection Agency 0.2011) circumstances. Phytoremediation is the use of plants for the treatment of contaminated soil sites and sediments J. Bio. Env. Sci. 2017 90 | Shakoor et al. and water. It is best applied at sites of persistent organic pollution with shallow, nutrient, or metal. Phytoremediation is an emerging technology for contaminated sites is attractive because of its low cost and versatility (Schnoor, 1997). Contaminated soils on the site using the processing plants. Phytoremediation is a plant that excessive accumulation of metals in contaminated soils in growth (National Research Council, 1997). Phytoremediation to facilitate the concentration of pollutants in contaminated soil, water or air is composed, and plants able to contain, degrade or eliminate metals, pesticides, solvents, explosives, crude oil and its derivatives, and other contaminants in the media that contain them. Phytoremediation have several techniques and these techniques depend on different factors, like soil type, contaminant type, soil depth and level of ground water. Special operation situations and specific technology applied at the contaminated site (Hyman and Dupont 2001). Techniques of phytoremediation Different techniques are involved in phytoremediation, such as phytoextraction, phytostabilisation, phytotransformation, phytostimulation, phytovolatilization, and rhizofiltration. 
Phytoextraction Phytoextraction is also called phytoabsorption or phytoaccumulation, in this technique heavy metals are removed by up taking through root form the water and soil environment, and accumulated into the shoot part (Rafati et al., 2011). Phytostabilisation Phytostabilisation is also known as phytoimmobilization. In this technique different type of plants are used for stabilization the contaminants from the soil environment (Ali et al., 2013). By using this technique, the bioavailability and mobility of the different contaminants are reduced. So, this technique is help to avoiding their movement into food chain as well as into ground water (Erakhrumen, 2007). Nevertheless, Phytostabilisation is the technique by which movement of heavy metals can be stop but its not permanent solution to remove the contamination from the soil. Basically, phytostabilisation is the management approach for inactivating the potential of toxic heavy metals form the soil environment contaminants (Vangronsveld et al., 2009).",
"title": ""
},
{
"docid": "d0253bb3efe714e6a34e8dd5fc7dcf81",
"text": "ICN has received a lot of attention in recent years, and is a promising approach for the Future Internet design. As multimedia is the dominating traffic in today's and (most likely) the Future Internet, it is important to consider this type of data transmission in the context of ICN. In particular, the adaptive streaming of multimedia content is a promising approach for usage within ICN, as the client has full control over the streaming session and has the possibility to adapt the multimedia stream to its context (e.g. network conditions, device capabilities), which is compatible with the paradigms adopted by ICN. In this article we investigate the implementation of adaptive multimedia streaming within networks adopting the ICN approach. In particular, we present our approach based on the recently ratified ISO/IEC MPEG standard Dynamic Adaptive Streaming over HTTP and the ICN representative Content-Centric Networking, including baseline evaluations and open research challenges.",
"title": ""
},
{
"docid": "73267467deec2701d6628a0d3572132e",
"text": "Neuromyelitis optica (NMO) is an inflammatory CNS syndrome distinct from multiple sclerosis (MS) that is associated with serum aquaporin-4 immunoglobulin G antibodies (AQP4-IgG). Prior NMO diagnostic criteria required optic nerve and spinal cord involvement but more restricted or more extensive CNS involvement may occur. The International Panel for NMO Diagnosis (IPND) was convened to develop revised diagnostic criteria using systematic literature reviews and electronic surveys to facilitate consensus. The new nomenclature defines the unifying term NMO spectrum disorders (NMOSD), which is stratified further by serologic testing (NMOSD with or without AQP4-IgG). The core clinical characteristics required for patients with NMOSD with AQP4-IgG include clinical syndromes or MRI findings related to optic nerve, spinal cord, area postrema, other brainstem, diencephalic, or cerebral presentations. More stringent clinical criteria, with additional neuroimaging findings, are required for diagnosis of NMOSD without AQP4IgG or when serologic testing is unavailable. The IPND also proposed validation strategies and achieved consensus on pediatric NMOSD diagnosis and the concepts of monophasic NMOSD and opticospinal MS. Neurology® 2015;85:1–13 GLOSSARY ADEM 5 acute disseminated encephalomyelitis; AQP4 5 aquaporin-4; IgG 5 immunoglobulin G; IPND 5 International Panel for NMO Diagnosis; LETM 5 longitudinally extensive transverse myelitis lesions; MOG 5 myelin oligodendrocyte glycoprotein; MS 5 multiple sclerosis; NMO 5 neuromyelitis optica; NMOSD 5 neuromyelitis optica spectrum disorders; SLE 5 systemic lupus erythematosus; SS 5 Sjögren syndrome. Neuromyelitis optica (NMO) is an inflammatory CNS disorder distinct from multiple sclerosis (MS). It became known as Devic disease following a seminal 1894 report. Traditionally, NMO was considered a monophasic disorder consisting of simultaneous bilateral optic neuritis and transverse myelitis but relapsing cases were described in the 20th century. MRI revealed normal brain scans and$3 vertebral segment longitudinally extensive transverse myelitis lesions (LETM) in NMO. The nosology of NMO, especially whether it represented a topographically restricted form of MS, remained controversial. A major advance was the discovery that most patients with NMO have detectable serum antibodies that target the water channel aquaporin-4 (AQP4–immunoglobulin G [IgG]), are highly specific for clinically diagnosed NMO, and have pathogenic potential. In 2006, AQP4-IgG serology was incorporated into revised NMO diagnostic criteria that relaxed clinical From the Departments of Neurology (D.M.W.) 
and Library Services (K.E.W.), Mayo Clinic, Scottsdale, AZ; the Children’s Hospital of Philadelphia (B.B.), PA; the Departments of Neurology and Ophthalmology (J.L.B.), University of Colorado Denver, Aurora; the Service de Neurologie (P.C.), Centre Hospitalier Universitaire de Fort de France, Fort-de-France, Martinique; Department of Neurology (W.C.), Sir Charles Gairdner Hospital, Perth, Australia; the Department of Neurology (T.C.), Massachusetts General Hospital, Boston; the Department of Neurology (J.d.S.), Strasbourg University, France; the Department of Multiple Sclerosis Therapeutics (K.F.), Tohoku University Graduate School of Medicine, Sendai, Japan; the Departments of Neurology and Neurotherapeutics (B.G.), University of Texas Southwestern Medical Center, Dallas; The Walton Centre NHS Trust (A.J.), Liverpool, UK; the Molecular Neuroimmunology Group, Department of Neurology (S.J.), University Hospital Heidelberg, Germany; the Center for Multiple Sclerosis Investigation (M.L.-P.), Federal University of Minas Gerais Medical School, Belo Horizonte, Brazil; the Department of Neurology (M.L.), Johns Hopkins University, Baltimore, MD; Portland VA Medical Center and Oregon Health and Sciences University (J.H.S.), Portland; the Department of Neurology (S.T.), National Pediatric Hospital Dr. Juan P. Garrahan, Buenos Aires, Argentina; the Department of Medicine (A.L.T.), University of British Columbia, Vancouver, Canada; Nuffield Department of Clinical Neurosciences (P.W.), University of Oxford, UK; and the Department of Neurology (B.G.W.), Mayo Clinic, Rochester, MN. Go to Neurology.org for full disclosures. Funding information and disclosures deemed relevant by the authors, if any, are provided at the end of the article. The Article Processing Charge was paid by the Guthy-Jackson Charitable Foundation. This is an open access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND), which permits downloading and sharing the work provided it is properly cited. The work cannot be changed in any way or used commercially. © 2015 American Academy of Neurology 1 a 2015 American Academy of Neurology. Unauthorized reproduction of this article is prohibited. Published Ahead of Print on June 19, 2015 as 10.1212/WNL.0000000000001729",
"title": ""
},
{
"docid": "6a1073b72ef20fd59e705400dbdcc868",
"text": "In today’s world, there is a continuous global need for more energy which, at the same time, has to be cleaner than the energy produced from the traditional generation technologies. This need has facilitated the increasing penetration of distributed generation (DG) technologies and primarily of renewable energy sources (RES). The extensive use of such energy sources in today’s electricity networks can indisputably minimize the threat of global warming and climate change. However, the power output of these energy sources is not as reliable and as easy to adjust to changing demand cycles as the output from the traditional power sources. This disadvantage can only be effectively overcome by the storing of the excess power produced by DG-RES. Therefore, in order for these new sources to become completely reliable as primary sources of energy, energy storage is a crucial factor. In this work, an overview of the current and future energy storage technologies used for electric power applications is carried out. Most of the technologies are in use today while others are still under intensive research and development. A comparison between the various technologies is presented in terms of the most important technological characteristics of each technology. The comparison shows that each storage technology is different in terms of its ideal network application environment and energy storage scale. This means that in order to achieve optimum results, the unique network environment and the specifications of the storage device have to be studied thoroughly, before a decision for the ideal storage technology to be selected is taken. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e67bb4c784b89b2fee1ab7687b545683",
"text": "Many people have a strong intuition that there is something morallyobjectionable about playing violent video games, particularly withincreases in the number of people who are playing them and the games'alleged contribution to some highly publicized crimes. In this paper,I use the framework of utilitarian, deontological, and virtue ethicaltheories to analyze the possibility that there might be some philosophicalfoundation for these intuitions. I raise the broader question of whetheror not participating in authentic simulations of immoral acts in generalis wrong. I argue that neither the utilitarian, nor the Kantian hassubstantial objections to violent game playing, although they offersome important insights into playing games in general and what it ismorally to be a ``good sport.'' The Aristotelian, however, has a plausibleand intuitive way to protest participation in authentic simulations ofviolent acts in terms of character: engaging in simulated immoral actserodes one's character and makes it more difficult for one to live afulfilled eudaimonic life.",
"title": ""
},
{
"docid": "b36cc742445db810d40c884a90e2cf42",
"text": "Telecommunication sector generates a huge amount of data due to increasing number of subscribers, rapidly renewable technologies; data based applications and other value added service. This data can be usefully mined for churn analysis and prediction. Significant research had been undertaken by researchers worldwide to understand the data mining practices that can be used for predicting customer churn. This paper provides a review of around 100 recent journal articles starting from year 2000 to present the various data mining techniques used in multiple customer based churn models. It then summarizes the existing telecom literature by highlighting the sample size used, churn variables employed and the findings of different DM techniques. Finally, we list the most popular techniques for churn prediction in telecom as decision trees, regression analysis and clustering, thereby providing a roadmap to new researchers to build upon novel churn management models.",
"title": ""
},
{
"docid": "6bf38b6decda962ea03ab429f5fbde4f",
"text": "Frame semantic representations have been useful in several applications ranging from text-to-scene generation, to question answering and social network analysis. Predicting such representations from raw text is, however, a challenging task and corresponding models are typically only trained on a small set of sentence-level annotations. In this paper, we present a semantic role labeling system that takes into account sentence and discourse context. We introduce several new features which we motivate based on linguistic insights and experimentally demonstrate that they lead to significant improvements over the current state-of-the-art in FrameNet-based semantic role labeling.",
"title": ""
},
{
"docid": "5e86f40cfc3b2e9664ea1f7cc5bf730c",
"text": "Due to a wide range of applications, wireless sensor networks (WSNs) have recently attracted a lot of interest to the researchers. Limited computational capacity and power usage are two major challenges to ensure security in WSNs. Recently, more secure communication or data aggregation techniques have discovered. So, familiarity with the current research in WSN security will benefit researchers greatly. In this paper, security related issues and challenges in WSNs are investigated. We identify the security threats and review proposed security mechanisms for WSNs. Moreover, we provide a brief discussion on the future research direction in WSN security.",
"title": ""
},
{
"docid": "e16bf4ab7c56b6827369f19afb2d4744",
"text": "In acoustic modeling for large vocabulary continuous speech recognition, it is essential to model long term dependency within speech signals. Usually, recurrent neural network (RNN) architectures, especially the long short term memory (LSTM) models, are the most popular choice. Recently, a novel architecture, namely feedforward sequential memory networks (FSMN), provides a non-recurrent architecture to model long term dependency in sequential data and has achieved better performance over RNNs on acoustic modeling and language modeling tasks. In this work, we propose a compact feedforward sequential memory networks (cFSMN) by combining FSMN with low-rank matrix factorization. We also make a slight modification to the encoding method used in FSMNs in order to further simplify the network architecture. On the Switchboard task, the proposed new cFSMN structures can reduce the model size by 60% and speed up the learning by more than 7 times while the models still significantly outperform the popular bidirection LSTMs for both frame-level cross-entropy (CE) criterion based training and MMI based sequence training.",
"title": ""
},
{
"docid": "fbcdb3d565519b47922394dc9d84985f",
"text": "We present a novel end-to-end trainable neural network model for task-oriented dialog systems. The model is able to track dialog state, issue API calls to knowledge base (KB), and incorporate structured KB query results into system responses to successfully complete task-oriented dialogs. The proposed model produces well-structured system responses by jointly learning belief tracking and KB result processing conditioning on the dialog history. We evaluate the model in a restaurant search domain using a dataset that is converted from the second Dialog State Tracking Challenge (DSTC2) corpus. Experiment results show that the proposed model can robustly track dialog state given the dialog history. Moreover, our model demonstrates promising results in producing appropriate system responses, outperforming prior end-to-end trainable neural network models using per-response accuracy evaluation metrics.",
"title": ""
},
{
"docid": "b8322d65e61be7fb252b2e418df85d3e",
"text": "the od. cted ly genof 997 Abstract. Algorithms of filtering, edge detection, and extraction of details and their implementation using cellular neural networks (CNN) are developed in this paper. The theory of CNN based on universal binary neurons (UBN) is also developed. A new learning algorithm for this type of neurons is carried out. Implementation of low-pass filtering algorithms using CNN is considered. Separate processing of the binary planes of gray-scale images is proposed. Algorithms of edge detection and impulsive noise filtering based on this approach and their implementation using CNN-UBN are presented. Algorithms of frequency correction reduced to filtering in the spatial domain are considered. These algorithms make it possible to extract details of given sizes. Implementation of such algorithms using CNN is presented. Finally, a general strategy of gray-scale image processing using CNN is considered. © 1997 SPIE and IS&T. [S1017-9909(97)00703-4]",
"title": ""
},
{
"docid": "b4bd19c2285199e280cb41e733ec5498",
"text": "In the past few years, mobile augmented reality (AR) has attracted a great deal of attention. It presents us a live, direct or indirect view of a real-world environment whose elements are augmented (or supplemented) by computer-generated sensory inputs such as sound, video, graphics or GPS data. Also, deep learning has the potential to improve the performance of current AR systems. In this paper, we propose a distributed mobile logo detection framework. Our system consists of mobile AR devices and a back-end server. Mobile AR devices can capture real-time videos and locally decide which frame should be sent to the back-end server for logo detection. The server schedules all detection jobs to minimise the maximum latency. We implement our system on the Google Nexus 5 and a desktop with a wireless network interface. Evaluation results show that our system can detect the view change activity with an accuracy of 95:7% and successfully process 40 image processing jobs before deadline. ARTICLE HISTORY Received 6 June 2018 Accepted 30 June 2018",
"title": ""
},
{
"docid": "cc2a7d6ac63f12b29a6d30f20b5547be",
"text": "The CyberDesk project is aimed at providing a software architecture that dynamically integrates software modules. This integration is driven by a user’s context, where context includes the user’s physical, social, emotional, and mental (focus-of-attention) environments. While a user’s context changes in all settings, it tends to change most frequently in a mobile setting. We have used the CyberDesk ystem in a desktop setting and are currently using it to build an intelligent home nvironment.",
"title": ""
},
{
"docid": "5b0b8da7faa91343bad6296fd7cb181f",
"text": "Transportation research relies heavily on a variety of data. From sensors to surveys, data supports dayto-day operations as well as long-term planning and decision-making. The challenges that arise due to the volume and variety of data that are found in transportation research can be effectively addressed by ontologies. This opportunity has already been recognized – there are a number of existing transportation ontologies, however the relationship between them is unclear. The goal of this work is to provide an overview of the opportunities for ontologies in transportation research and operation, and to present a survey of existing transportation ontologies to serve two purposes: (1) to provide a resource for the transportation research community to aid in understanding (and potentially selecting between) existing transportation ontologies; and (2) to identify future work for the development of transportation ontologies, by identifying areas that may be lacking.",
"title": ""
},
{
"docid": "96f616c7a821c1f74fc77e5649483343",
"text": "Study of the forecasting models using large scale microblog discussions and the search behavior data can provide a good insight for better understanding the market movements. In this work we collected a dataset of 2 million tweets and search volume index (SVI from Google) for a period of June 2010 to September 2011. We model a set of comprehensive causative relationships over this dataset for various market securities like equity (Dow Jones Industrial Average-DJIA and NASDAQ-100), commodity markets (oil and gold) and Euro Forex rates. We also investigate the lagged and statistically causative relations of Twitter sentiments developed during active trading days and market inactive days in combination with the search behavior of public before any change in the prices/indices. Our results show extent of lagged significance with high correlation value upto 0.82 between search volumes and gold price in USD. We find weekly accuracy in direction (up and down prediction) uptil 94.3% for DJIA and 90% for NASDAQ-100 with significant reduction in mean average percentage error for all the forecasting models.",
"title": ""
},
{
"docid": "a5168d6ca63300f26b7388f67d10cb3c",
"text": "In recent years, the improvement of wireless protocols, the development of cloud services and the lower cost of hardware have started a new era for smart homes. One such enabling technologies is fog computing, which extends cloud computing to the edge of a network allowing for developing novel Internet of Things (IoT) applications and services. Under the IoT fog computing paradigm, IoT gateways are usually utilized to exchange messages with IoT nodes and a cloud. WiFi and ZigBee stand out as preferred communication technologies for smart homes. WiFi has become very popular, but it has a limited application due to its high energy consumption and the lack of standard mesh networking capabilities for low-power devices. For such reasons, ZigBee was selected by many manufacturers for developing wireless home automation devices. As a consequence, these technologies may coexist in the 2.4 GHz band, which leads to collisions, lower speed rates and increased communications latencies. This article presents ZiWi, a distributed fog computing Home Automation System (HAS) that allows for carrying out seamless communications among ZigBee and WiFi devices. This approach diverges from traditional home automation systems, which often rely on expensive central controllers. In addition, to ease the platform's building process, whenever possible, the system makes use of open-source software (all the code of the nodes is available on GitHub) and Commercial Off-The-Shelf (COTS) hardware. The initial results, which were obtained in a number of representative home scenarios, show that the developed fog services respond several times faster than the evaluated cloud services, and that cross-interference has to be taken seriously to prevent collisions. In addition, the current consumption of ZiWi's nodes was measured, showing the impact of encryption mechanisms.",
"title": ""
},
{
"docid": "52d2ff16f6974af4643a15440ae09fec",
"text": "The adoption of Course Management Systems (CMSs) for web-based instruction continues to increase in today’s higher education. A CMS is a software program or integrated platform that contains a series of web-based tools to support a number of activities and course management procedures (Severson, 2004). Examples of Course Management Systems are Blackboard, WebCT, eCollege, Moodle, Desire2Learn, Angel, etc. An argument for the adoption of elearning environments using CMSs is the flexibility of such environments when reaching out to potential learners in remote areas where brick and mortar institutions are non-existent. It is also believed that e-learning environments can have potential added learning benefits and can improve students’ and educators’ self-regulation skills, in particular their metacognitive skills. In spite of this potential to improve learning by means of using a CMS for the delivery of e-learning, the features and functionalities that have been built into these systems are often underutilized. As a consequence, the created learning environments in CMSs do not adequately scaffold learners to improve their selfregulation skills. In order to support the improvement of both the learners’ subject matter knowledge and learning strategy application, the e-learning environments within CMSs should be designed to address learners’ diversity in terms of learning styles, prior knowledge, culture, and self-regulation skills. Self-regulative learners are learners who can demonstrate ‘personal initiative, perseverance and adaptive skill in pursuing learning’ (Zimmerman, 2002). Self-regulation requires adequate monitoring strategies and metacognitive skills. The created e-learning environments should encourage the application of learners’ metacognitive skills by prompting learners to plan, attend to relevant content, and monitor and evaluate their learning. This position paper sets out to inform policy makers, educators, researchers, and others of the importance of a metacognitive e-learning approach when designing instruction using Course Management Systems. Such a metacognitive approach will improve the utilization of CMSs to support learners on their path to self-regulation. We argue that a powerful CMS incorporates features and functionalities that can provide extensive scaffolding to learners and support them in becoming self-regulated learners. Finally, we believe that extensive training and support is essential if educators are expected to develop and implement CMSs as powerful learning tools.",
"title": ""
}
] |
scidocsrr
|
a7cce9cae6e35e04b891a7e3a340ab2b
|
Sex & the City . How Emotional Factors Affect Financial Choices
|
[
{
"docid": "4fa7ee44cdc4b0cd439723e9600131bd",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/ucpress.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.",
"title": ""
},
{
"docid": "9a1201d68018fce5ce413511dc64e8b7",
"text": "In the health sciences it is quite common to carry out studies designed to determine the influence of one or more variables upon a given response variable. When this response variable is numerical, simple or multiple regression techniques are used, depending on the case. If the response variable is a qualitative variable (dichotomic or polychotomic), as for example the presence or absence of a disease, linear regression methodology is not applicable, and simple or multinomial logistic regression is used, as applicable.",
"title": ""
}
] |
[
{
"docid": "c367d19e00816538753e6226785d05fd",
"text": "BACKGROUND AND OBJECTIVE\nMildronate, an inhibitor of carnitine-dependent metabolism, is considered to be an anti-ischemic drug. This study is designed to evaluate the efficacy and safety of mildronate injection in treating acute ischemic stroke.\n\n\nMETHODS\nWe performed a randomized, double-blind, multicenter clinical study of mildronate injection for treating acute cerebral infarction. 113 patients in the experimental group received mildronate injection, and 114 patients in the active-control group received cinepazide injection. In addition, both groups were given aspirin as a basic treatment. Modified Rankin Scale (mRS) score was performed at 2 weeks and 3 months after treatment. National Institutes of Health Stroke Scale (NIHSS) score and Barthel Index (BI) score were performed at 2 weeks after treatment, and then vital signs and adverse events were evaluated.\n\n\nRESULTS\nA total of 227 patients were randomized to treatment (n = 113, mildronate; n = 114, active-control). After 3 months, there was no significant difference for the primary endpoint between groups categorized in terms of mRS scores of 0-1 and 0-2 (p = 0.52 and p = 0.07, respectively). There were also no significant differences for the secondary endpoint between groups categorized in terms of NIHSS scores of >5 and >8 (p = 0.98 and p = 0.97, respectively) or BI scores of >75 and >95 (p = 0.49 and p = 0.47, respectively) at 15 days. The incidence of serious adverse events was similar between the two groups.\n\n\nCONCLUSION\nMildronate injection is as effective and safe as cinepazide injection in treating acute cerebral infarction.",
"title": ""
},
{
"docid": "b2332b118b846c9f417558a02975e20a",
"text": "This is the third in a series of four tutorial papers on biomedical signal processing and concerns the estimation of the power spectrum (PS) and coherence function (CF) od biomedical data. The PS is introduced and its estimation by means of the discrete Fourier transform is considered in terms of the problem of resolution in the frequency domain. The periodogram is introduced and its variance, bias and the effects of windowing and smoothing are considered. The use of the autocovariance function as a stage in power spectral estimation is described and the effects of windows in the autocorrelation domain are compared with the related effects of windows in the original time domain. The concept of coherence is introduced and the many ways in which coherence functions might be estimated are considered.",
"title": ""
},
{
"docid": "99ea14010fe3acd37952fb355a25b71c",
"text": "Today, as the increasing the amount of using internet, there are so most information interchanges are performed in that internet. So, the methods used as intrusion detective tools for protecting network systems against diverse attacks are became too important. The available of IDS are getting more powerful. Support Vector Machine was used as the classical pattern reorganization tools have been widely used for Intruder detections. There have some different characteristic of features in building an Intrusion Detection System. Conventional SVM do not concern about that. Our enhanced SVM Model proposed with an Recursive Feature Elimination (RFE) and kNearest Neighbor (KNN) method to perform a feature ranking and selection task of the new model. RFE can reduce redundant & recursive features and KNN can select more precisely than the conventional SVM. Experiments and comparisons are conducted through intrusion dataset: the KDD Cup 1999 dataset.",
"title": ""
},
{
"docid": "001784246312172835ceca4461ec28c5",
"text": "Abstract-The string-searching problem is to find all occurrences of pattern(s) in a text string. The Aho-Corasick string searching algorithm simultaneously finds all occurrences of multiple patterns in one pass through the text. On the other hand, the Boyer-Moore algorithm is understood to be the fastest algorithm for a single pattern. By combining the ideas of these two algorithms, we present an efficient string searching algorithm for multiple patterns. The algorithm runs in sublinear time, on the average, as the BM algorithm achieves, and its preprocessing time is linear proportional to the sum of the lengths of the patterns like the AC algorithm.",
"title": ""
},
{
"docid": "8e1b10ebb48b86ce151ab44dc0473829",
"text": "─ Cuckoo Search (CS) is a new met heuristic algorithm. It is being used for solving optimization problem. It was developed in 2009 by XinShe Yang and Susah Deb. Uniqueness of this algorithm is the obligatory brood parasitism behavior of some cuckoo species along with the Levy Flight behavior of some birds and fruit flies. Cuckoo Hashing to Modified CS have also been discussed in this paper. CS is also validated using some test functions. After that CS performance is compared with those of GAs and PSO. It has been shown that CS is superior with respect to GAs and PSO. At last, the effect of the experimental results are discussed and proposed for future research. Index terms ─ Cuckoo search, Levy Flight, Obligatory brood parasitism, NP-hard problem, Markov Chain, Hill climbing, Heavy-tailed algorithm.",
"title": ""
},
{
"docid": "2a5f555c00d98a87fe8dd6d10e27dc38",
"text": "Neurodegeneration is a phenomenon that occurs in the central nervous system through the hallmarks associating the loss of neuronal structure and function. Neurodegeneration is observed after viral insult and mostly in various so-called 'neurodegenerative diseases', generally observed in the elderly, such as Alzheimer's disease, multiple sclerosis, Parkinson's disease and amyotrophic lateral sclerosis that negatively affect mental and physical functioning. Causative agents of neurodegeneration have yet to be identified. However, recent data have identified the inflammatory process as being closely linked with multiple neurodegenerative pathways, which are associated with depression, a consequence of neurodegenerative disease. Accordingly, pro‑inflammatory cytokines are important in the pathophysiology of depression and dementia. These data suggest that the role of neuroinflammation in neurodegeneration must be fully elucidated, since pro‑inflammatory agents, which are the causative effects of neuroinflammation, occur widely, particularly in the elderly in whom inflammatory mechanisms are linked to the pathogenesis of functional and mental impairments. In this review, we investigated the role played by the inflammatory process in neurodegenerative diseases.",
"title": ""
},
{
"docid": "350aeae5c69db969c35c673c0be2a98a",
"text": "Driver yawning detection is one of the key technologies used in driver fatigue monitoring systems. Real-time driver yawning detection is a very challenging problem due to the dynamics in driver's movements and lighting conditions. In this paper, we present a yawning detection system that consists of a face detector, a nose detector, a nose tracker and a yawning detector. Deep learning algorithms are developed for detecting driver face area and nose location. A nose tracking algorithm that combines Kalman filter with a dedicated open-source TLD (Track-Learning-Detection) tracker is developed to generate robust tracking results under dynamic driving conditions. Finally a neural network is developed for yawning detection based on the features including nose tracking confidence value, gradient features around corners of mouth and face motion features. Experiments are conducted on real-world driving data, and results show that the deep convolutional networks can generate a satisfactory classification result for detecting driver's face and nose when compared with other pattern classification methods, and the proposed yawning detection system is effective in real-time detection of driver's yawning states.",
"title": ""
},
{
"docid": "ab2c4d5317d2e10450513283c21ca6d3",
"text": "We present DEC0DE, a system for recovering information from phones with unknown storage formats, a critical problem for forensic triage. Because phones have myriad custom hardware and software, we examine only the stored data. Via flexible descriptions of typical data structures, and using a classic dynamic programming algorithm, we are able to identify call logs and address book entries in phones across varied models and manufacturers. We designed DEC0DE by examining the formats of one set of phone models, and we evaluate its performance on other models. Overall, we are able to obtain high performance for these unexamined models: an average recall of 97% and precision of 80% for call logs; and average recall of 93% and precision of 52% for address books. Moreover, at the expense of recall dropping to 14%, we can increase precision of address book recovery to 94% by culling results that don’t match between call logs and address book entries on the same phone.",
"title": ""
},
{
"docid": "ed28faf2ff89ac4da642593e1b7eef9c",
"text": "Massive MIMO, also known as very-large MIMO or large-scale antenna systems, is a new technique that potentially can offer large network capacities in multi-user scenarios. With a massive MIMO system, we consider the case where a base station equipped with a large number of antenna elements simultaneously serves multiple single-antenna users in the same time-frequency resource. So far, investigations are mostly based on theoretical channels with independent and identically distributed (i.i.d.) complex Gaussian coefficients, i.e., i.i.d. Rayleigh channels. Here, we investigate how massive MIMO performs in channels measured in real propagation environments. Channel measurements were performed at 2.6 GHz using a virtual uniform linear array (ULA), which has a physically large aperture, and a practical uniform cylindrical array (UCA), which is more compact in size, both having 128 antenna ports. Based on measurement data, we illustrate channel behavior of massive MIMO in three representative propagation conditions, and evaluate the corresponding performance. The investigation shows that the measured channels, for both array types, allow us to achieve performance close to that in i.i.d. Rayleigh channels. It is concluded that in real propagation environments we have characteristics that can allow for efficient use of massive MIMO, i.e., the theoretical advantages of this new technology can also be harvested in real channels.",
"title": ""
},
{
"docid": "1dbb34265c9b01f69262b3270fa24e97",
"text": "Binary content-addressable memory (BiCAM) is a popular high speed search engine in hardware, which provides output typically in one clock cycle. But speed of CAM comes at the cost of various disadvantages, such as high latency, low storage density, and low architectural scalability. In addition, field-programmable gate arrays (FPGAs), which are used in many applications because of its advantages, do not have hard IPs for CAM. Since FPGAs have embedded IPs for random-access memories (RAMs), several RAM-based CAM architectures on FPGAs are available in the literature. However, these architectures are especially targeted for ternary CAMs, not for BiCAMs; thus, the available RAM-based CAMs may not be fully beneficial for BiCAMs in terms of architectural design. Since modern FPGAs are enriched with logical resources, why not to configure them to design BiCAM on FPGA? This letter presents a logic-based high performance BiCAM architecture (LH-CAM) using Xilinx FPGA. The proposed CAM is composed of CAM words and associated comparators. A sample of LH-CAM of size ${64\\times 36}$ is implemented on Xilinx Virtex-6 FPGA. Compared with the latest prior work, the proposed CAM is much simpler in architecture, storage efficient, reduces power consumption by 40.92%, and improves speed by 27.34%.",
"title": ""
},
{
"docid": "95e2a8e2d1e3a1bbfbf44d20f9956cf0",
"text": "Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to stateof-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https: //github.com/mrlyk423/relation extraction.",
"title": ""
},
{
"docid": "15b05bdc1310d038110b545686082c98",
"text": "The class of materials combining high electrical or thermal conductivity, optical transparency and flexibility is crucial for the development of many future electronic and optoelectronic devices. Silver nanowire networks show very promising results and represent a viable alternative to the commonly used, scarce and brittle indium tin oxide. The science and technology research of such networks are reviewed to provide a better understanding of the physical and chemical properties of this nanowire-based material while opening attractive new applications.",
"title": ""
},
{
"docid": "b8b2d68955d6ed917900d30e4e15f71e",
"text": "Due to the explosive growth of wireless devices and wireless traffic, the spectrum scarcity problem is becoming more urgent in numerous Radio Frequency (RF) systems. At the same time, many studies have shown that spectrum resources allocated to various existing RF systems are largely underutilized. As a potential solution to this spectrum scarcity problem, spectrum sharing among multiple, potentially dissimilar RF systems has been proposed. However, such spectrum sharing solutions are challenging to develop due to the lack of efficient coordination schemes and potentially different PHY/MAC properties. In this paper, we investigate existing spectrum sharing methods facilitating coexistence of various RF systems. The cognitive radio technique, which has been the subject of various surveys, constitutes a subset of our wider scope. We study more general coexistence scenarios and methods such as coexistence of communication systems with similar priorities, utilizing similar or different protocols or standards, as well as the coexistence of communication and non-communication systems using the same spectral resources. Finally, we explore open research issues on the spectrum sharing methods as well as potential approaches to resolving these issues. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "11b05bd0c0b5b9319423d1ec0441e8a7",
"text": "Today’s huge volumes of data, heterogeneous information and communication technologies, and borderless cyberinfrastructures create new challenges for security experts and law enforcement agencies investigating cybercrimes. The future of digital forensics is explored, with an emphasis on these challenges and the advancements needed to effectively protect modern societies and pursue cybercriminals.",
"title": ""
},
{
"docid": "d19a77b3835b7b43acf57da377b11cb4",
"text": "Given the importance of relation or event extraction from biomedical research publications to support knowledge capture and synthesis, and the strong dependency of approaches to this information extraction task on syntactic information, it is valuable to understand which approaches to syntactic processing of biomedical text have the highest performance. We perform an empirical study comparing state-of-the-art traditional feature-based and neural network-based models for two core natural language processing tasks of part-of-speech (POS) tagging and dependency parsing on two benchmark biomedical corpora, GENIA and CRAFT. To the best of our knowledge, there is no recent work making such comparisons in the biomedical context; specifically no detailed analysis of neural models on this data is available. Experimental results show that in general, the neural models outperform the feature-based models on two benchmark biomedical corpora GENIA and CRAFT. We also perform a task-oriented evaluation to investigate the influences of these models in a downstream application on biomedical event extraction, and show that better intrinsic parsing performance does not always imply better extrinsic event extraction performance. We have presented a detailed empirical study comparing traditional feature-based and neural network-based models for POS tagging and dependency parsing in the biomedical context, and also investigated the influence of parser selection for a biomedical event extraction downstream task. We make the retrained models available at https://github.com/datquocnguyen/BioPosDep.",
"title": ""
},
{
"docid": "9bf99d48bc201147a9a9ad5af547a002",
"text": "Consider a biped evolving in the sagittal plane. The unexpected rotation of the supporting foot can be avoided by controlling the zero moment point (ZMP). The objective of this study is to propose and analyze a control strategy for simultaneously regulating the position of the ZMP and the joints of the robot. If the tracking requirements were posed in the time domain, the problem would be underactuated in the sense that the number of inputs would be less than the number of outputs. To get around this issue, the proposed controller is based on a path-following control strategy, previously developed for dealing with the underactuation present in planar robots without actuated ankles. In particular, the control law is defined in such a way that only the kinematic evolution of the robot's state is regulated, but not its temporal evolution. The asymptotic temporal evolution of the robot is completely defined through a one degree-of-freedom subsystem of the closed-loop model. Since the ZMP is controlled, bipedal walking that includes a prescribed rotation of the foot about the toe can also be considered. Simple analytical conditions are deduced that guarantee the existence of a periodic motion and the convergence toward this motion.",
"title": ""
},
{
"docid": "4bfc1e2fbb2b1dea29360c410e5258b4",
"text": "Fault tolerance is gaining interest as a means to increase the reliability and availability of distributed energy systems. In this paper, a voltage-oriented doubly fed induction generator, which is often used in wind turbines, is examined. Furthermore, current, voltage, and position sensor fault detection, isolation, and reconfiguration are presented. Machine operation is not interrupted. A bank of observers provides residuals for fault detection and replacement signals for the reconfiguration. Control is temporarily switched from closed loop into open-loop to decouple the drive from faulty sensor readings. During a short period of open-loop operation, the fault is isolated using parity equations. Replacement signals from observers are used to reconfigure the drive and reenter closed-loop control. There are no large transients in the current. Measurement results and stability analysis show good results.",
"title": ""
},
{
"docid": "35dda21bd1f2c06a446773b0bfff2dd7",
"text": "Mobile devices and their application marketplaces drive the entire economy of the today’s mobile landscape. Android platforms alone have produced staggering revenues, exceeding five billion USD, which has attracted cybercriminals and increased malware in Android markets at an alarming rate. To better understand this slew of threats, we present CopperDroid, an automatic VMI-based dynamic analysis system to reconstruct the behaviors of Android malware. The novelty of CopperDroid lies in its agnostic approach to identify interesting OSand high-level Android-specific behaviors. It reconstructs these behaviors by observing and dissecting system calls and, therefore, is resistant to the multitude of alterations the Android runtime is subjected to over its life-cycle. CopperDroid automatically and accurately reconstructs events of interest that describe, not only well-known process-OS interactions (e.g., file and process creation), but also complex intraand inter-process communications (e.g., SMS reception), whose semantics are typically contextualized through complex Android objects. Because CopperDroid’s reconstruction mechanisms are agnostic to the underlying action invocation methods, it is able to capture actions initiated both from Java and native code execution. CopperDroid’s analysis generates detailed behavioral profiles that abstract a large stream of low-level—often uninteresting—events into concise, high-level semantics, which are well-suited to provide insightful behavioral traits and open the possibility to further research directions. We carried out an extensive evaluation to assess the capabilities and performance of CopperDroid on more than 2,900 Android malware samples. Our experiments show that CopperDroid faithfully reconstructs OSand Android-specific behaviors. Additionally, we demonstrate how CopperDroid can be leveraged to disclose additional behaviors through the use of a simple, yet effective, app stimulation technique. Using this technique, we successfully triggered and disclosed additional behaviors on more than 60% of the analyzed malware samples. This qualitatively demonstrates the versatility of CopperDroid’s ability to improve dynamic-based code coverage.",
"title": ""
},
{
"docid": "df78e51c3ed3a6924bf92db6000062e1",
"text": "We study the problem of computing all Pareto-optimal journeys in a dynamic public transit network for two criteria: arrival time and number of transfers. Existing algorithms consider this as a graph problem, and solve it using variants of Dijkstra’s algorithm. Unfortunately, this leads to either high query times or suboptimal solutions. We take a different approach. We introduce RAPTOR, our novel round-based public transit router. Unlike previous algorithms, it is not Dijkstrabased, looks at each route (such as a bus line) in the network at most once per round, and can be made even faster with simple pruning rules and parallelization using multiple cores. Because it does not rely on preprocessing, RAPTOR works in fully dynamic scenarios. Moreover, it can be easily extended to handle flexible departure times or arbitrary additional criteria, such as fare zones. When run on London’s complex public transportation network, RAPTOR computes all Paretooptimal journeys between two random locations an order of magnitude faster than previous approaches, which easily enables interactive applications.",
"title": ""
},
{
"docid": "6d5480bf1ee5d401e39f5e65d0aaba25",
"text": "Engagement is a key reason for introducing gamification to learning and thus serves as an important measurement of its effectiveness. Based on a literature review and meta-synthesis, this paper proposes a comprehensive framework of engagement in gamification for learning. The framework sketches out the connections among gamification strategies, dimensions of engagement, and the ultimate learning outcome. It also elicits other task - and user - related factors that may potentially impact the effect of gamification on learner engagement. To verify and further strengthen the framework, we conducted a user study to demonstrate that: 1) different gamification strategies can trigger different facets of engagement; 2) the three dimensions of engagement have varying effects on skill acquisition and transfer; and 3) task nature and learner characteristics that were overlooked in previous studies can influence the engagement process. Our framework provides an in-depth understanding of the mechanism of gamification for learning, and can serve as a theoretical foundation for future research and design.",
"title": ""
}
] |
scidocsrr
|
8d77035c1879c1e48446e074ee226c60
|
Case Studies of Damage to Tall Steel Moment-Frame Buildings in Southern California during Large San Andreas Earthquakes
|
[
{
"docid": "a112a01246256e38b563f616baf02cef",
"text": "This is the second of two papers describing a procedure for the three dimensional nonlinear timehistory analysis of steel framed buildings. An overview of the procedure and the theory for the panel zone element and the plastic hinge beam element are presented in Part I. In this paper, the theory for an efficient new element for modeling beams and columns in steel frames called the elastofiber element is presented, along with four illustrative examples. The elastofiber beam element is divided into three segments two end nonlinear segments and an interior elastic segment. The cross-sections of the end segments are subdivided into fibers. Associated with each fiber is a nonlinear hysteretic stress-strain law for axial stress and strain. This accounts for coupling of nonlinear material behavior between bending about the major and minor axes of the cross-section and axial deformation. Examples presented include large deflection of an elastic cantilever, cyclic loading of a cantilever beam, pushover analysis of a 20-story steel moment-frame building to collapse, and strong ground motion analysis of a 2-story unsymmetric steel moment-frame building. 1Post-Doctoral Scholar, Seismological Laboratory, MC 252-21, California Institute of Technology, Pasadena, CA91125. Email: [email protected] 2Professor, Civil Engineering and Applied Mechanics, MC 104-44, California Institute of Technology, Pasadena, CA-91125",
"title": ""
}
] |
[
{
"docid": "b20720aa8ea6fa5fc0f738a605534fbe",
"text": "e proliferation of social media in communication and information dissemination has made it an ideal platform for spreading rumors. Automatically debunking rumors at their stage of diusion is known as early rumor detection, which refers to dealing with sequential posts regarding disputed factual claims with certain variations and highly textual duplication over time. us, identifying trending rumors demands an ecient yet exible model that is able to capture long-range dependencies among postings and produce distinct representations for the accurate early detection. However, it is a challenging task to apply conventional classication algorithms to rumor detection in earliness since they rely on hand-craed features which require intensive manual eorts in the case of large amount of posts. is paper presents a deep aention model on the basis of recurrent neural networks (RNN) to learn selectively temporal hidden representations of sequential posts for identifying rumors. e proposed model delves so-aention into the recurrence to simultaneously pool out distinct features with particular focus and produce hidden representations that capture contextual variations of relevant posts over time. Extensive experiments on real datasets collected from social media websites demonstrate that (1) the deep aention based RNN model outperforms state-of-thearts that rely on hand-craed features; (2) the introduction of so aention mechanism can eectively distill relevant parts to rumors from original posts in advance; (3) the proposed method detects rumors more quickly and accurately than competitors.",
"title": ""
},
{
"docid": "ffef016fba37b3dc167a1afb7e7766f0",
"text": "We show that the Thompson Sampling algorithm achieves logarithmic expected regret for the Bernoulli multi-armed bandit problem. More precisely, for the two-armed bandit problem, the expected regret in time T is O( lnT ∆ + 1 ∆3 ). And, for the N -armed bandit problem, the expected regret in time T is O( [ ( ∑N i=2 1 ∆i ) ] lnT ). Our bounds are optimal but for the dependence on ∆i and the constant factors in big-Oh.",
"title": ""
},
{
"docid": "49a538fc40d611fceddd589b0c9cb433",
"text": "Both intuition and creativity are associated with knowledge creation, yet a clear link between them has not been adequately established. First, the available empirical evidence for an underlying relationship between intuition and creativity is sparse in nature. Further, this evidence is arguable as the concepts are diversely operationalized and the measures adopted are often not validated sufficiently. Combined, these issues make the findings from various studies examining the link between intuition and creativity difficult to replicate. Nevertheless, the role of intuition in creativity should not be neglected as it is often reported to be a core component of the idea generation process, which in conjunction with idea evaluation are crucial phases of creative cognition. We review the prior research findings in respect of idea generation and idea evaluation from the view that intuition can be construed as the gradual accumulation of cues to coherence. Thus, we summarize the literature on what role intuitive processes play in the main stages of the creative problem-solving process and outline a conceptual framework of the interaction between intuition and creativity. Finally, we discuss the main challenges of measuring intuition as well as possible directions for future research.",
"title": ""
},
{
"docid": "97281ba9e6da8460f003bb860836bb10",
"text": "In this letter, a novel miniaturized periodic element for constructing a bandpass frequency selective surface (FSS) is proposed. Compared to previous miniaturized structures, the FSS proposed has better miniaturization performance with the dimension of a unit cell only 0.061 λ × 0.061 λ , where λ represents the wavelength of the resonant frequency. Moreover, the miniaturization characteristic is stable with respect to different polarizations and incident angles of the waves illuminating. Both simulation and measurement are taken, and the results obtained demonstrate the claimed performance.",
"title": ""
},
{
"docid": "5bf9ebaecbcd4b713a52d3572e622cbd",
"text": "Essay scoring is a complicated processing requiring analyzing, summarizing and judging expertise. Traditional work on essay scoring focused on automatic handcrafted features, which are expensive yet sparse. Neural models offer a way to learn syntactic and semantic features automatically, which can potentially improve upon discrete features. In this paper, we employ convolutional neural network (CNN) for the effect of automatically learning features, and compare the result with the state-of-art discrete baselines. For in-domain and domain-adaptation essay scoring tasks, our neural model empirically outperforms discrete models.",
"title": ""
},
{
"docid": "931c75847fdfec787ad6a31a6568d9e3",
"text": "This paper introduces concepts and algorithms of feature selection, surveys existing feature selection algorithms for classification and clustering, groups and compares different algorithms with a categorizing framework based on search strategies, evaluation criteria, and data mining tasks, reveals unattempted combinations, and provides guidelines in selecting feature selection algorithms. With the categorizing framework, we continue our efforts toward-building an integrated system for intelligent feature selection. A unifying platform is proposed as an intermediate step. An illustrative example is presented to show how existing feature selection algorithms can be integrated into a meta algorithm that can take advantage of individual algorithms. An added advantage of doing so is to help a user employ a suitable algorithm without knowing details of each algorithm. Some real-world applications are included to demonstrate the use of feature selection in data mining. We conclude this work by identifying trends and challenges of feature selection research and development.",
"title": ""
},
{
"docid": "182dc182f7c814c18cb83a0515149cec",
"text": "This paper discusses about methods for detection of leukemia. Various image processing techniques are used for identification of red blood cell and immature white cells. Different disease like anemia, leukemia, malaria, deficiency of vitamin B12, etc. can be diagnosed accordingly. Objective is to detect the leukemia affected cells and count it. According to detection of immature blast cells, leukemia can be identified and also define that either it is chronic or acute. To detect immature cells, number of methods are used like histogram equalization, linear contrast stretching, some morphological techniques like area opening, area closing, erosion, dilation. Watershed transform, K means, histogram equalization & linear contrast stretching, and shape based features are accurate 72.2%, 72%, 73.7 % and 97.8% respectively.",
"title": ""
},
{
"docid": "4f3e37db8d656fe1e746d6d3a37878b5",
"text": "Shorter product life cycles and aggressive marketing, among other factors, have increased the complexity of sales forecasting. Forecasts are often produced using a Forecasting Support System that integrates univariate statistical forecasting with managerial judgment. Forecasting sales under promotional activity is one of the main reasons to use expert judgment. Alternatively, one can replace expert adjustments by regression models whose exogenous inputs are promotion features (price, display, etc.). However, these regression models may have large dimensionality as well as multicollinearity issues. We propose a novel promotional model that overcomes these limitations. It combines Principal Component Analysis to reduce the dimensionality of the problem and automatically identifies the demand dynamics. For items with limited history, the proposed model is capable of providing promotional forecasts by selectively pooling information across established products. The performance of the model is compared against forecasts provided by experts and statistical benchmarks, on weekly data; outperforming both substantially.",
"title": ""
},
{
"docid": "ff076ca404a911cc523af1aa51da8f47",
"text": "Most Machine Learning (ML) researchers focus on automatic Machine Learning (aML) where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from the availability of “big data”. However, sometimes, for example in health informatics, we are confronted not a small number of data sets or rare events, and with complex problems where aML-approaches fail or deliver unsatisfactory results. Here, interactive Machine Learning (iML) may be of help and the “human-in-the-loop” approach may be beneficial in solving computationally hard problems, where human expertise can help to reduce an exponential search space through heuristics. In this paper, experiments are discussed which help to evaluate the effectiveness of the iML-“human-in-the-loop” approach, particularly in opening the “black box”, thereby enabling a human to directly and indirectly manipulating and interacting with an algorithm. For this purpose, we selected the Ant Colony Optimization (ACO) framework, and use it on the Traveling Salesman Problem (TSP) which is of high importance in solving many practical problems in health informatics, e.g. in the study of proteins.",
"title": ""
},
{
"docid": "e0ba4e4b7af3cba6bed51f2f697ebe5e",
"text": "In this paper, we argue that instead of solely focusing on developing efficient architectures to accelerate well-known low-precision CNNs, we should also seek to modify the network to suit the FPGA. We develop a fully automative toolflow which focuses on modifying the network through filter pruning, such that it efficiently utilizes the FPGA hardware whilst satisfying a predefined accuracy threshold. Although fewer weights are re-moved in comparison to traditional pruning techniques designed for software implementations, the overall model complexity and feature map storage is greatly reduced. We implement the AlexNet and TinyYolo networks on the large-scale ImageNet and PascalVOC datasets, to demonstrate up to roughly 2× speedup in frames per second and 2× reduction in resource requirements over the original network, with equal or improved accuracy.",
"title": ""
},
{
"docid": "15f0c49a2ddcb20cd8acaa419b2eae44",
"text": "Automatic generation of presentation slides for academic papers is a very challenging task. Previous methods for addressing this task are mainly based on document summarization techniques and they extract document sentences to form presentation slides, which are not well-structured and concise. In this study, we propose a phrase-based approach to generate well-structured and concise presentation slides for academic papers. Our approach first extracts phrases from the given paper, and then learns both the saliency of each phrase and the hierarchical relationship between a pair of phrases. Finally a greedy algorithm is used to select and align the salient phrases in order to form the well-structured presentation slides. Evaluation results on a real dataset verify the efficacy of our proposed approach.",
"title": ""
},
{
"docid": "864ab702d0b45235efe66cd9e3bc5e66",
"text": "In this work we release our extensible and easily configurable neural network training software. It provides a rich set of functional layers with a particular focus on efficient training of recurrent neural network topologies on multiple GPUs. The source of the software package is public and freely available for academic research purposes and can be used as a framework or as a standalone tool which supports a flexible configuration. The software allows to train state-of-the-art deep bidirectional long short-term memory (LSTM) models on both one dimensional data like speech or two dimensional data like handwritten text and was used to develop successful submission systems in several evaluation campaigns.",
"title": ""
},
{
"docid": "eede8e690991c27074a0485c7c046e17",
"text": "We performed meta-analyses on 60 neuroimaging (PET and fMRI) studies of working memory (WM), considering three types of storage material (spatial, verbal, and object), three types of executive function (continuous updating of WM, memory for temporal order, and manipulation of information in WM), and interactions between material and executive function. Analyses of material type showed the expected dorsal-ventral dissociation between spatial and nonspatial storage in the posterior cortex, but not in the frontal cortex. Some support was found for left frontal dominance in verbal WM, but only for tasks with low executive demand. Executive demand increased right lateralization in the frontal cortex for spatial WM. Tasks requiring executive processing generally produce more dorsal frontal activations than do storage-only tasks, but not all executive processes show this pattern. Brodmann's areas (BAs) 6, 8, and 9, in the superior frontal cortex, respond most when WM must be continuously updated and when memory for temporal order must be maintained. Right BAs 10 and 47, in the ventral frontal cortex, respond more frequently with demand for manipulation (including dual-task requirements or mental operations). BA 7, in the posterior parietal cortex, is involved in all types of executive function. Finally, we consider a potential fourth executive function: selective attention to features of a stimulus to be stored in WM, which leads to increased probability of activating the medial prefrontal cortex (BA 32) in storage tasks.",
"title": ""
},
{
"docid": "e55067bddff5f7f3cb646d02342f419c",
"text": "Over the last two decades there have been several process models proposed (and used) for data and information fusion. A common theme of these models is the existence of multiple levels of processing within the data fusion process. In the 1980’s three models were adopted: the intelligence cycle, the JDL model and the Boyd control. The 1990’s saw the introduction of the Dasarathy model and the Waterfall model. However, each of these models has particular advantages and disadvantages. A new model for data and information fusion is proposed. This is the Omnibus model, which draws together each of the previous models and their associated advantages whilst managing to overcome some of the disadvantages. Where possible the terminology used within the Omnibus model is aimed at a general user of data fusion technology to allow use by a distributed audience.",
"title": ""
},
{
"docid": "f4239b2be54e80666bd21d8c50a6b1b0",
"text": "Limited work has examined how self-affirmation might lead to positive outcomes beyond the maintenance of a favorable self-image. To address this gap in the literature, we conducted two studies in two cultures to establish the benefits of self-affirmation for psychological well-being. In Study 1, South Korean participants who affirmed their values for 2 weeks showed increased eudaimonic well-being (need satisfaction, meaning, and flow) relative to control participants. In Study 2, U.S. participants performed a self-affirmation activity for 4 weeks. Extending Study 1, after 2 weeks, self-affirmation led both to increased eudaimonic well-being and hedonic well-being (affect balance). By 4 weeks, however, these effects were non-linear, and the increases in affect balance were only present for vulnerable participants-those initially low in eudaimonic well-being. In sum, the benefits of self-affirmation appear to extend beyond self-protection to include two types of well-being.",
"title": ""
},
{
"docid": "8c214f081f47e12d4dccd71b6038d3bf",
"text": "Switched reluctance machines (SRMs) are considered as serious candidates for starter/alternator (S/A) systems in more electric cars. Robust performance in the presence of high temperature, safe operation, offering high efficiency, and a very long constant power region, along with a rugged structure contribute to their suitability for this high impact application. To enhance these qualities, we have developed key technologies including sensorless operation over the entire speed range and closed-loop torque and speed regulation. The present paper offers an in-depth analysis of the drive dynamics during motoring and generating modes of operation. These findings will be used to explain our control strategies in the context of the S/A application. Experimental and simulation results are also demonstrated to validate the practicality of our claims.",
"title": ""
},
{
"docid": "cb561e56e60ba0e5eef2034158c544c2",
"text": "Android is a modern and popular software platform for smartphones. Among its predominant features is an advanced security model which is based on application-oriented mandatory access control and sandboxing. This allows developers and users to restrict the execution of an application to the privileges it has (mandatorily) assigned at installation time. The exploitation of vulnerabilities in program code is hence believed to be confined within the privilege boundaries of an application’s sandbox. However, in this paper we show that a privilege escalation attack is possible. We show that a genuine application exploited at runtime or a malicious application can escalate granted permissions. Our results immediately imply that Android’s security model cannot deal with a transitive permission usage attack and Android’s sandbox model fails as a last resort against malware and sophisticated runtime attacks.",
"title": ""
},
{
"docid": "dc610cdd3c6cc5ae443cf769bd139e78",
"text": "With modern smart phones and powerful mobile devices, Mobile apps provide many advantages to the community but it has also grown the demand for online availability and accessibility. Cloud computing is provided to be widely adopted for several applications in mobile devices. However, there are many advantages and disadvantages of using mobile applications and cloud computing. This paper focuses in providing an overview of mobile cloud computing advantages, disadvantages. The paper discusses the importance of mobile cloud applications and highlights the mobile cloud computing open challenges",
"title": ""
},
{
"docid": "16cac565c6163db83496c41ea98f61f9",
"text": "The rapid increase in multimedia data transmission over the Internet necessitates the multi-modal summarization (MMS) from collections of text, image, audio and video. In this work, we propose an extractive multi-modal summarization method that can automatically generate a textual summary given a set of documents, images, audios and videos related to a specific topic. The key idea is to bridge the semantic gaps between multi-modal content. For audio information, we design an approach to selectively use its transcription. For visual information, we learn the joint representations of text and images using a neural network. Finally, all of the multimodal aspects are considered to generate the textual summary by maximizing the salience, non-redundancy, readability and coverage through the budgeted optimization of submodular functions. We further introduce an MMS corpus in English and Chinese, which is released to the public1. The experimental results obtained on this dataset demonstrate that our method outperforms other competitive baseline methods.",
"title": ""
},
{
"docid": "fcfc16b94f06bf6120431a348e97b9ac",
"text": "Multi-label classification is a practical yet challenging task in machine learning related fields, since it requires the prediction of more than one label category for each input instance. We propose a novel deep neural networks (DNN) based model, Canonical Correlated AutoEncoder (C2AE), for solving this task. Aiming at better relating feature and label domain data for improved classification, we uniquely perform joint feature and label embedding by deriving a deep latent space, followed by the introduction of label-correlation sensitive loss function for recovering the predicted label outputs. Our C2AE is achieved by integrating the DNN architectures of canonical correlation analysis and autoencoder, which allows end-to-end learning and prediction with the ability to exploit label dependency. Moreover, our C2AE can be easily extended to address the learning problem with missing labels. Our experiments on multiple datasets with different scales confirm the effectiveness and robustness of our proposed method, which is shown to perform favorably against state-of-the-art methods for multi-label classification.",
"title": ""
}
] |
scidocsrr
|
f45440e73526700aa7fc7bca4a71b282
|
Understanding student learning trajectories using multimodal learning analytics within an embodied-interaction learning environment
|
[
{
"docid": "5b55b1c913aa9ec461c6c51c3d00b11b",
"text": "Grounded cognition rejects traditional views that cognition is computation on amodal symbols in a modular system, independent of the brain's modal systems for perception, action, and introspection. Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition. Accumulating behavioral and neural evidence supporting this view is reviewed from research on perception, memory, knowledge, language, thought, social cognition, and development. Theories of grounded cognition are also reviewed, as are origins of the area and common misperceptions of it. Theoretical, empirical, and methodological issues are raised whose future treatment is likely to affect the growth and impact of grounded cognition.",
"title": ""
},
{
"docid": "892c75c6b719deb961acfe8b67b982bb",
"text": "Growing interest in data and analytics in education, teaching, and learning raises the priority for increased, high-quality research into the models, methods, technologies, and impact of analytics. Two research communities -- Educational Data Mining (EDM) and Learning Analytics and Knowledge (LAK) have developed separately to address this need. This paper argues for increased and formal communication and collaboration between these communities in order to share research, methods, and tools for data mining and analysis in the service of developing both LAK and EDM fields.",
"title": ""
}
] |
[
{
"docid": "382eb7a0e8bc572506a40bf3cbe6fd33",
"text": "The long-term ambition of the Tactile Internet is to enable a democratization of skill, and how it is being delivered globally. An integral part of this is to be able to transmit touch in perceived real-time, which is enabled by suitable robotics and haptics equipment at the edges, along with an unprecedented communications network. The fifth generation (5G) mobile communications systems will underpin this emerging Internet at the wireless edge. This paper presents the most important technology concepts, which lay at the intersection of the larger Tactile Internet and the emerging 5G systems. The paper outlines the key technical requirements and architectural approaches for the Tactile Internet, pertaining to wireless access protocols, radio resource management aspects, next generation core networking capabilities, edge-cloud, and edge-AI capabilities. The paper also highlights the economic impact of the Tactile Internet as well as a major shift in business models for the traditional telecommunications ecosystem.",
"title": ""
},
{
"docid": "3e5d887ff00e4eff8e408e6d51d747b2",
"text": "We present a small object sensitive method for object detection. Our method is built based on SSD (Single Shot MultiBox Detector (Liu et al. 2016)), a simple but effective deep neural network for image object detection. The discrete nature of anchor mechanism used in SSD, however, may cause misdetection for the small objects located at gaps between the anchor boxes. SSD performs better for small object detection after circular shifts of the input image. Therefore, auxiliary feature maps are generated by conducting circular shifts over lower extra feature maps in SSD for small-object detection, which is equivalent to shifting the objects in order to fit the locations of anchor boxes. We call our proposed system Shifted SSD. Moreover, pinpoint accuracy of localization is of vital importance to small objects detection. Hence, two novel methods called Smooth NMS and IoU-Prediction module are proposed to obtain more precise locations. Then for video sequences, we generate trajectory hypothesis to obtain predicted locations in a new frame for further improved performance. Experiments conducted on PASCAL VOC 2007, along with MS COCO, KITTI and our small object video datasets, validate that both mAP and recall are improved with different degrees and the speed is almost the same as SSD.",
"title": ""
},
{
"docid": "79fa1a6ec5490e80909b7cabc37e32aa",
"text": "Face identification and detection has become very popular, interesting and wide field of current research area. As there are several algorithms for face detection exist but none of the algorithms globally detect all sorts of human faces among the different colors and intensities in a given picture. In this paper, a novel method for face detection technique has been described. Here, the centers of both the eyes are detected using generic eye template matching method. After detecting the center of both the eyes, the corresponding face bounding box is determined. The experimental results have shown that the proposed algorithm is able to accomplish successfully proper detection and to mark the exact face and eye region in the given image.",
"title": ""
},
{
"docid": "f1a7bcd681969d5a5167d1b0397af13a",
"text": "The most data-efficient algorithms for reinforcement learning (RL) in robotics are based on uncertain dynamical models: after each episode, they first learn a dynamical model of the robot, then they use an optimization algorithm to find a policy that maximizes the expected return given the model and its uncertainties. It is often believed that this optimization can be tractable only if analytical, gradient-based algorithms are used; however, these algorithms require using specific families of reward functions and policies, which greatly limits the flexibility of the overall approach. In this paper, we introduce a novel model-based RL algorithm, called Black-DROPS (Black-box Data-efficient RObot Policy Search) that: (1) does not impose any constraint on the reward function or the policy (they are treated as black-boxes), (2) is as data-efficient as the state-of-the-art algorithm for data-efficient RL in robotics, and (3) is as fast (or faster) than analytical approaches when several cores are available. The key idea is to replace the gradient-based optimization algorithm with a parallel, black-box algorithm that takes into account the model uncertainties. We demonstrate the performance of our new algorithm on two standard control benchmark problems (in simulation) and a low-cost robotic manipulator (with a real robot).",
"title": ""
},
{
"docid": "57ca7842e7ab21b51c4069e76121fc26",
"text": "This paper surveys and investigates the strengths and weaknesses of a number of recent approaches to advanced workflow modelling. Rather than inventing just another workflow language, we briefly describe recent workflow languages, and we analyse them with respect to their support for advanced workflow topics. Object Coordination Nets, Workflow Graphs, WorkFlow Nets, and an approach based on Workflow Evolution are described as dedicated workflow modelling approaches. In addition, the Unified Modelling Language as the de facto standard in objectoriented modelling is also investigated. These approaches are discussed with respect to coverage of workflow perspectives and support for flexibility and analysis issues in workflow management, which are today seen as two major areas for advanced workflow support. Given the different goals and backgrounds of the approaches mentioned, it is not surprising that each approach has its specific strengths and weaknesses. We clearly identify these strengths and weaknesses, and we conclude with ideas for combining their best features.",
"title": ""
},
{
"docid": "9e3d3783aa566b50a0e56c71703da32b",
"text": "Heterogeneous networks are widely used to model real-world semi-structured data. The key challenge of learning over such networks is the modeling of node similarity under both network structures and contents. To deal with network structures, most existing works assume a given or enumerable set of meta-paths and then leverage them for the computation of meta-path-based proximities or network embeddings. However, expert knowledge for given meta-paths is not always available, and as the length of considered meta-paths increases, the number of possible paths grows exponentially, which makes the path searching process very costly. On the other hand, while there are often rich contents around network nodes, they have hardly been leveraged to further improve similarity modeling. In this work, to properly model node similarity in content-rich heterogeneous networks, we propose to automatically discover useful paths for pairs of nodes under both structural and content information. To this end, we combine continuous reinforcement learning and deep content embedding into a novel semi-supervised joint learning framework. Specifically, the supervised reinforcement learning component explores useful paths between a small set of example similar pairs of nodes, while the unsupervised deep embedding component captures node contents and enables inductive learning on the whole network. The two components are jointly trained in a closed loop to mutually enhance each other. Extensive experiments on three real-world heterogeneous networks demonstrate the supreme advantages of our algorithm.",
"title": ""
},
{
"docid": "6dbf49c714f6e176273317d4274b93de",
"text": "Categorical compositional distributional model of [9] sug gests a way to combine grammatical composition of the formal, type logi cal models with the corpus based, empirical word representations of distribut ional semantics. This paper contributes to the project by expanding the model to al so capture entailment relations. This is achieved by extending the representatio s of words from points in meaning space to density operators, which are probabilit y d stributions on the subspaces of the space. A symmetric measure of similarity an d an asymmetric measure of entailment is defined, where lexical entailment i s measured using von Neumann entropy, the quantum variant of Kullback-Leibl er divergence. Lexical entailment, combined with the composition map on wo rd representations, provides a method to obtain entailment relations on the leve l of sentences. Truth theoretic and corpus-based examples are provided.",
"title": ""
},
{
"docid": "17a1f03485b74ba0f1efd76e118e2b7a",
"text": "DISC Measure, Squeezer, Categorical Data Clustering, Cosine similarity References Rishi Sayal and Vijay Kumar. V. 2011. A novel Similarity Measure for Clustering Categorical Data Sets. International Journal of Computer Application (0975-8887). Aditya Desai, Himanshu Singh and Vikram Pudi. 2011. DISC Data-Intensive Similarity Measure for Categorical Data. Pacific-Asia Conferences on Knowledge Discovery Data Mining. Shyam Boriah, Varun Chandola and Vipin Kumar. 2008. Similarity Measure for Clustering Categorical Data. Comparative Evaluation. SIAM International Conference on Data Mining-SDM. Taoying Li, Yan Chen. 2009. Fuzzy Clustering Ensemble Algorithm for partitional Categorical Data. IEEE, International conference on Business Intelligence and Financial Engineering.",
"title": ""
},
{
"docid": "61fb62e6979789f5f465a41d46f62c57",
"text": "Previously, ANSI/IEEE relay current transformer (CT) sizing criteria were based on traditional symmetrical calculations that are usually discussed by technical articles and manufacturers' guidelines. In 1996, IEEE Standard C37.110-1996 introduced (1+X/R) offset multiplying, current asymmetry, and current distortion factors, officially changing the CT sizing guideline. A critical concern is the performance of fast protective schemes (instantaneous or differential elements) during severe saturation of low-ratio CTs. Will the instantaneous element operate before the upstream breaker relay trips? Will the differential element misoperate for out-of-zone faults? The use of electromagnetic and analog relay technology does not assure selectivity. Modern microprocessor relays introduce additional uncertainty into the design/verification process with different sampling techniques and proprietary sensing/recognition/trip algorithms. This paper discusses the application of standard CT accuracy classes with modern ANSI/IEEE CT calculation methodology. This paper is the first of a two-part series; Part II provides analytical waveform analysis discussions to illustrate the concepts conveyed in Part I",
"title": ""
},
{
"docid": "9c43da9473facdecda86442e157736db",
"text": "The soaring demand for intelligent mobile applications calls for deploying powerful deep neural networks (DNNs) on mobile devices. However, the outstanding performance of DNNs notoriously relies on increasingly complex models, which in turn is associated with an increase in computational expense far surpassing mobile devices’ capacity. What is worse, app service providers need to collect and utilize a large volume of users’ data, which contain sensitive information, to build the sophisticated DNN models. Directly deploying these models on public mobile devices presents prohibitive privacy risk. To benefit from the on-device deep learning without the capacity and privacy concerns, we design a private model compression framework RONA. Following the knowledge distillation paradigm, we jointly use hint learning, distillation learning, and self learning to train a compact and fast neural network. The knowledge distilled from the cumbersome model is adaptively bounded and carefully perturbed to enforce differential privacy. We further propose an elegant query sample selection method to reduce the number of queries and control the privacy loss. A series of empirical evaluations as well as the implementation on an Android mobile device show that RONA can not only compress cumbersome models efficiently but also provide a strong privacy guarantee. For example, on SVHN, when a meaningful (9.83, 10−6)-differential privacy is guaranteed, the compact model trained by RONA can obtain 20× compression ratio and 19× speed-up with merely 0.97% accuracy loss.",
"title": ""
},
{
"docid": "11de13e5347ee392b6535fe4b55eed24",
"text": "The requirement for new flexible adaptive grippers is the ability to detect and recognize objects in their environments. It is known that robotic manipulators are highly nonlinear systems, and an accurate mathematical model is difficult to obtain, thus making it difficult no control using conventional techniques. Here, a novel design of an adaptive neuro fuzzy inference strategy (ANFIS) for controlling input displacement of a new adaptive compliant gripper is presented. This design of the gripper has embedded sensors as part of its structure. The use of embedded sensors in a robot gripper gives the control system the ability to control input displacement of the gripper and to recognize particular shapes of the grasping objects. Since the conventional control strategy is a very challenging task, fuzzy logic based controllers are considered as potential candidates for such an application. Fuzzy based controllers develop a control signal which yields on the firing of the rule base. The selection of the proper rule base depending on the situation can be achieved by using an ANFIS controller, which becomes an integrated method of approach for the control purposes. In the designed ANFIS scheme, neural network techniques are used to select a proper rule base, which is achieved using the back propagation algorithm. The simulation results presented in this paper show the effectiveness of the developed method. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "351562a44f9126db2f48e2760e26af4e",
"text": "It has been widely observed that there is no single “dominant” SAT solver; instead, different solvers perform best on different instances. Rather than following the traditional approach of choosing the best solver for a given class of instances, we advocate making this decision online on a per-instance basis. Building on previous work, we describe SATzilla, an automated approach for constructing per-instance algorithm portfolios for SAT that use socalled empirical hardness models to choose among their constituent solvers. This approach takes as input a distribution of problem instances and a set of component solvers, and constructs a portfolio optimizing a given objective function (such as mean runtime, percent of instances solved, or score in a competition). The excellent performance of SATzilla was independently verified in the 2007 SAT Competition, where our SATzilla07 solvers won three gold, one silver and one bronze medal. In this article, we go well beyond SATzilla07 by making the portfolio construction scalable and completely automated, and improving it by integrating local search solvers as candidate solvers, by predicting performance score instead of runtime, and by using hierarchical hardness models that take into account different types of SAT instances. We demonstrate the effectiveness of these new techniques in extensive experimental results on data sets including instances from the most recent SAT competition.",
"title": ""
},
{
"docid": "95b2219dc34de9f0fe40e84e6df8a1e3",
"text": "Most computer vision and especially segmentation tasks require to extract features that represent local appearance of patches. Relevant features can be further processed by learning algorithms to infer posterior probabilities that pixels belong to an object of interest. Deep Convolutional Neural Networks (CNN) define a particularly successful class of learning algorithms for semantic segmentation, although they proved to be very slow to train even when employing special purpose hardware. We propose, for the first time, a general purpose segmentation algorithm to extract the most informative and interpretable features as convolution kernels while simultaneously building a multivariate decision tree. The algorithm trains several orders of magnitude faster than regular CNNs and achieves state of the art results in processing quality on benchmark datasets.",
"title": ""
},
{
"docid": "5673bc2ca9f08516f14485ef8bbba313",
"text": "Analog-to-digital converters are essential building blocks in modern electronic systems. They form the critical link between front-end analog transducers and back-end digital computers that can efficiently implement a wide variety of signal-processing functions. The wide variety of digitalsignal-processing applications leads to the availability of a wide variety of analog-to-digital (A/D) converters of varying price, performance, and quality. Ideally, an A/D converter encodes a continuous-time analog input voltage, VIN , into a series of discrete N -bit digital words that satisfy the relation",
"title": ""
},
{
"docid": "3ffe3cf44eb79a9560a873de774ecc67",
"text": "Gummy smile constitutes a relatively frequent aesthetic alteration characterized by excessive exhibition of the gums during smiling movements of the upper lip. It is the result of an inadequate relation between the lower edge of the upper lip, the positioning of the anterosuperior teeth, the location of the upper jaw, and the gingival margin position with respect to the dental crown. Altered Passive Eruption (APE) is a clinical situation produced by excessive gum overlapping over the enamel limits, resulting in a short clinical crown appearance, that gives the sensation of hidden teeth. The term itself suggests the causal mechanism, i.e., failure in the passive phase of dental eruption, though there is no scientific evidence to support this. While there are some authors who consider APE to be a risk situation for periodontal health, its clearest clinical implication refers to oral esthetics. APE is a factor that frequently contributes to the presence of a gummy or gingival smile, and it can easily be corrected by periodontal surgery. Nevertheless, it is essential to establish a correct differential diagnosis and good treatment plan. A literature review is presented of the dental eruption process, etiological hypotheses of APE, its morphologic classification, and its clinical relevance.",
"title": ""
},
{
"docid": "a39c0db041f31370135462af467426ed",
"text": "Part of the ventral temporal lobe is thought to be critical for face perception, but what determines this specialization remains unknown. We present evidence that expertise recruits the fusiform gyrus 'face area'. Functional magnetic resonance imaging (fMRI) was used to measure changes associated with increasing expertise in brain areas selected for their face preference. Acquisition of expertise with novel objects (greebles) led to increased activation in the right hemisphere face areas for matching of upright greebles as compared to matching inverted greebles. The same areas were also more activated in experts than in novices during passive viewing of greebles. Expertise seems to be one factor that leads to specialization in the face area.",
"title": ""
},
{
"docid": "0bce954374d27d4679eb7562350674fc",
"text": "Humanoid robotics is attracting the interest of many research groups world-wide. In particular, developing humanoids requires the implementation of manipulation capabilities, which is still a most complex problem in robotics. This paper presents an overview of current activities in the development of humanoid robots, with special focus on manipulation. Then we discuss our current approach to the design and development of anthropomorphic sensorized hand and of anthropomorphic control and sensory-motor coordination schemes. Current achievements in the development of a robotic human hand prosthesis are described, together with preliminary experimental results, as well as in the implementation of biologically-inspired schemes for control and sensory-motor co-ordination in manipulation, derived from models of well-identified human brain areas.",
"title": ""
},
{
"docid": "1e852e116c11a6c7fb1067313b1ffaa3",
"text": "Article history: Received 20 February 2013 Received in revised form 30 July 2013 Accepted 11 September 2013 Available online 21 September 2013",
"title": ""
},
{
"docid": "b2a2fdf56a79c1cb82b8b3a55b9d841d",
"text": "This paper describes the architecture and implementation of a shortest path processor, both in reconfigurable hardware and VLSI. This processor is based on the principles of recurrent spatiotemporal neural network. The processor’s operation is similar to Dijkstra’s algorithm and it can be used for network routing calculations. The objective of the processor is to find the least cost path in a weighted graph between a given node and one or more destinations. The digital implementation exhibits a regular interconnect structure and uses simple processing elements, which is well suited for VLSI implementation and reconfigurable hardware.",
"title": ""
}
] |
scidocsrr
|
81099c920db32cea29cfb49c4efe9cd7
|
The effect of Gamified mHealth App on Exercise Motivation and Physical Activity
|
[
{
"docid": "05d9a8471939217983c1e47525ff595e",
"text": "BACKGROUND\nMobile phone health apps may now seem to be ubiquitous, yet much remains unknown with regard to their usage. Information is limited with regard to important metrics, including the percentage of the population that uses health apps, reasons for adoption/nonadoption, and reasons for noncontinuance of use.\n\n\nOBJECTIVE\nThe purpose of this study was to examine health app use among mobile phone owners in the United States.\n\n\nMETHODS\nWe conducted a cross-sectional survey of 1604 mobile phone users throughout the United States. The 36-item survey assessed sociodemographic characteristics, history of and reasons for health app use/nonuse, perceived effectiveness of health apps, reasons for stopping use, and general health status.\n\n\nRESULTS\nA little over half (934/1604, 58.23%) of mobile phone users had downloaded a health-related mobile app. Fitness and nutrition were the most common categories of health apps used, with most respondents using them at least daily. Common reasons for not having downloaded apps were lack of interest, cost, and concern about apps collecting their data. Individuals more likely to use health apps tended to be younger, have higher incomes, be more educated, be Latino/Hispanic, and have a body mass index (BMI) in the obese range (all P<.05). Cost was a significant concern among respondents, with a large proportion indicating that they would not pay anything for a health app. Interestingly, among those who had downloaded health apps, trust in their accuracy and data safety was quite high, and most felt that the apps had improved their health. About half of the respondents (427/934, 45.7%) had stopped using some health apps, primarily due to high data entry burden, loss of interest, and hidden costs.\n\n\nCONCLUSIONS\nThese findings suggest that while many individuals use health apps, a substantial proportion of the population does not, and that even among those who use health apps, many stop using them. These data suggest that app developers need to better address consumer concerns, such as cost and high data entry burden, and that clinical trials are necessary to test the efficacy of health apps to broaden their appeal and adoption.",
"title": ""
},
{
"docid": "5e7a06213a32e0265dcb8bc11a5bb3f1",
"text": "The global obesity epidemic has prompted our community to explore the potential for technology to play a stronger role in promoting healthier lifestyles. Although there are several examples of successful games based on focused physical interaction, persuasive applications that integrate into everyday life have had more mixed results. This underscores a need for designs that encourage physical activity while addressing fun, sustainability, and behavioral change. This note suggests a new perspective, inspired in part by the social nature of many everyday fitness applications and by the successful encouragement of long term play in massively multiplayer online games. We first examine the game design literature to distill a set of principles for discussing and comparing applications. We then use these principles to analyze an existing application. Finally, we present Kukini, a design for an everyday fitness game.",
"title": ""
},
{
"docid": "be08b71c9af0e27f4f932919c2aaa24b",
"text": "Gamification is the \"use of game design elements in non-game contexts\" (Deterding et al, 2011, p.1). A frequently used model for gamification is to equate an activity in the non-game context with points and have external rewards for reaching specified point thresholds. One significant problem with this model of gamification is that it can reduce the internal motivation that the user has for the activity, as it replaces internal motivation with external motivation. If, however, the game design elements can be made meaningful to the user through information, then internal motivation can be improved as there is less need to emphasize external rewards. This paper introduces the concept of meaningful gamification through a user-centered exploration of theories behind organismic integration theory, situational relevance, situated motivational affordance, universal design for learning, and player-generated content. A Brief Introduction to Gamification One definition of gamification is \"the use of game design elements in non-game contexts\" (Deterding et al, 2011, p.1). A common implementation of gamification is to take the scoring elements of video games, such as points, levels, and achievements, and apply them to a work or educational context. While the term is relatively new, the concept has been around for some time through loyalty systems like frequent flyer miles, green stamps, and library summer reading programs. These gamification programs can increase the use of a service and change behavior, as users work toward meeting these goals to reach external rewards (Zichermann & Cunningham, 2011, p. 27). Gamification has met with significant criticism by those who study games. One problem is with the name. By putting the term \"game\" first, it implies that the entire activity will become an engaging experience, when, in reality, gamification typically uses only the least interesting part of a game the scoring system. The term \"pointsification\" has been suggested as a label for gamification systems that add nothing more than a scoring system to a non-game activity (Robertson, 2010). One definition of games is \"a form of play with goals and structure\" (Maroney, 2001); the points-based gamification focuses on the goals and leaves the play behind. Ian Bogost suggests the term be changed to \"exploitationware,\" as that is a better description of what is really going on (2011). The underlying message of these criticisms of gamification is that there are more effective ways than a scoring system to engage users. Another concern is that organizations getting involved with gamification are not aware of the potential long-term negative impact of gamification. Underlying the concept of gamification is motivation. People can be driven to do something because of internal or external motivation. A meta-analysis by Deci, Koestner, and Ryan of 128 studies that examined motivation in educational settings found that almost all forms of rewards (except for non-controlling verbal rewards) reduced internal motivation (2001). The implication of this is that once gamification is used to provide external motivation, the user's internal motivation decreases. If the organization starts using gamification based upon external rewards and then decides to stop the rewards program, that organization will be worse off than when it started as users will be less likely to return to the behavior without the external reward (Deci, Koestner & Ryan, 2001). 
In the book Gamification by Design, the authors claim that this belief in internal motivation over extrinsic rewards is unfounded, and gamification can be used for organizations to control the behavior of users by replacing those internal motivations with extrinsic rewards. They do admit, though, that \"once you start giving someone a reward, you have to keep her in that reward loop forever\" (Zichermann & Cunningham, 2011, p. 27).
In order for these activities to be meaningful to a specific user, however, they have to be relevant to that user. Situational Relevance and Situated Motivational Affordance One of the key research areas in Library and Information Science has been about the concept of relevance as related to information retrieval. A user has an information need, and a relevant document is one that resolves some of that information need. The concept of relevance is important in determining the effectiveness of search tools and algorithms. Many research projects that have compared search tools looked at the same query posed to different systems, and then used judges to determine what was a \"relevant\" response to that query. This approach has been heavily critiqued, as there are many variables that affect if a user finds something relevant at that moment in his or her searching process. Schamber reviewed decades of research to find generalizable criteria that could be used to determine what is truly relevant to a query and came to the conclusion that the only way to know if something is relevant is to ask the user (1994). Two users with the same search query will have different information backgrounds, so that a document that is relevant for one user may not be relevant to another user. This concept of \"situational relevance\" is important when thinking about gamification. When someone else creates goals for a user, it is akin to an external judge deciding what is relevant to a query. Without involving the user, there is no way to know what goals are relevant to a user's background, interest, or needs. In a points-based gamification system, the goal of scoring points is less likely to be relevant to a user if the activity that the points measure is not relevant to that user. For example, in a hybrid automobile, the gamification systems revolve around conservation and the point system can reflect how much energy is being saved. If the concept of saving energy is relevant to a user, then a point system Preprint of: Nicholson, S. (2012, June). A User-Centered Theoretical Framework for Meaningful Gamification. Paper Presented at Games+Learning+Society 8.0, Madison, WI. based upon that concept will also be relevant to that user. If the user is not internally concerned with saving energy, then a gamification system based upon saving energy will not be relevant to that user. There may be other elements of the driving experience that are of interest to a user, so if each user can select what aspect of the driving experience is measured, more users will find the system to be relevant. By involving the user in the creation or customization of the gamification system, the user can select or create meaningful game elements and goals that fall in line with their own interests. A related theory out of Human-Computer Interaction that has been applied to gamification is “situated motivational affordance” (Deterding, 2011b). This model was designed to help gamification designers consider the context of each o",
"title": ""
}
] |
[
{
"docid": "0c6b6575616ad22dab5bac9c25907d36",
"text": "Identifying students’ learning styles has several benefits such as making students aware of their strengths and weaknesses when it comes to learning and the possibility to personalize their learning environment to their learning styles. While there exist learning style questionnaires for identifying a student’s learning style, such questionnaires have several disadvantages and therefore, research has been conducted on automatically identifying learning styles from students’ behavior in a learning environment. Current approaches to automatically identify learning styles have an average precision between 66% and 77%, which shows the need for improvements in order to use such automatic approaches reliably in learning environments. In this paper, four computational intelligence algorithms (artificial neural network, genetic algorithm, ant colony system and particle swarm optimization) have been investigated with respect to their potential to improve the precision of automatic learning style identification. Each algorithm was evaluated with data from 75 students. The artificial neural network shows the most promising results with an average precision of 80.7%, followed by particle swarm optimization with an average precision of 79.1%. Improving the precision of automatic learning style identification allows more students to benefit from more accurate information about their learning styles as well as more accurate personalization towards accommodating their learning styles in a learning environment. Furthermore, teachers can have a better understanding of their students and be able to provide more appropriate interventions.",
"title": ""
},
{
"docid": "8694f84e4e2bd7da1e678a3b38ccd447",
"text": "This paper describes a general methodology for extracting attribute-value pairs from web pages. It consists of two phases: candidate generation, in which syntactically likely attribute-value pairs are annotated; and candidate filtering, in which semantically improbable annotations are removed. We describe three types of candidate generators and two types of candidate filters, all of which are designed to be massively parallelizable. Our methods can handle 1 billion web pages in less than 6 hours with 1,000 machines. The best generator and filter combination achieves 70% F-measure compared to a hand-annotated corpus.",
"title": ""
},
{
"docid": "0c3b25e74497fb2b76c9943b1237979b",
"text": "Massive (Multiple-Input–Multiple-Output) is a wireless technology which aims to serve several different devices simultaneously in the same frequency band through spatial multiplexing, made possible by using a large number of antennas at the base station. e many antennas facilitates efficient beamforming, based on channel estimates acquired from uplink reference signals, which allows the base station to transmit signals exactly where they are needed. e multiplexing together with the array gain from the beamforming can increase the spectral efficiency over contemporary systems. One challenge of practical importance is how to transmit data in the downlink when no channel state information is available. When a device initially joins the network, prior to transmiing uplink reference signals that enable beamforming, it needs system information—instructions on how to properly function within the network. It is transmission of system information that is the main focus of this thesis. In particular, the thesis analyzes how the reliability of the transmission of system information depends on the available amount of diversity. It is shown how downlink reference signals, space-time block codes, and power allocation can be used to improve the reliability of this transmission. In order to estimate the uplink and downlink channels from uplink reference signals, which is imperative to ensure scalability in the number of base station antennas, massive relies on channel reciprocity. is thesis shows that the principles of channel reciprocity can also be exploited by a jammer, a malicious transmier, aiming to disrupt legitimate communication between two devices. A heuristic scheme is proposed in which the jammer estimates the channel to a target device blindly, without any knowledge of the transmied legitimate signals, and subsequently beamforms noise towards the target. Under the same power constraint, the proposed jammer can disrupt the legitimate link more effectively than a conventional omnidirectional jammer in many cases.",
"title": ""
},
{
"docid": "17ba29c670e744d6e4f9e93ceb109410",
"text": "This paper presents a novel online video recommendation system called VideoReach, which alleviates users' efforts on finding the most relevant videos according to current viewings without a sufficient collection of user profiles as required in traditional recommenders. In this system, video recommendation is formulated as finding a list of relevant videos in terms of multimodal relevance (i.e. textual, visual, and aural relevance) and user click-through. Since different videos have different intra-weights of relevance within an individual modality and inter-weights among different modalities, we adopt relevance feedback to automatically find optimal weights by user click-though, as well as an attention fusion function to fuse multimodal relevance. We use 20 clips as the representative test videos, which are searched by top 10 queries from more than 13k online videos, and report superior performance compared with an existing video site.",
"title": ""
},
{
"docid": "41ef29542308363b180aa7685330b905",
"text": "We conducted a literature review on systems that track learning analytics data (e.g., resource use, time spent, assessment data, etc.) and provide a report back to students in the form of visualizations, feedback, or recommendations. This review included a rigorous article search process; 945 articles were identified in the initial search. After filtering out articles that did not meet the inclusion criteria, 94 articles were included in the final analysis. Articles were coded on five categories chosen based on previous work done in this area: functionality, data sources, design analysis, perceived effects, and actual effects. The purpose of this review is to identify trends in the current student-facing learning analytics reporting system literature and provide recommendations for learning analytics researchers and practitioners for future work.",
"title": ""
},
{
"docid": "6dddd252eec80ec4f3535a82e25809cf",
"text": "The design and construction of truly humanoid robots that can perceive and interact with the environment depends significantly on their perception capabilities. In this paper we present the Karlsruhe Humanoid Head, which has been designed to be used both as part of our humanoid robots ARMAR-IIIa and ARMAR-IIIb and as a stand-alone robot head for studying various visual perception tasks in the context of object recognition and human-robot interaction. The head has seven degrees of freedom (DoF). The eyes have a common tilt and can pan independently. Each eye is equipped with two digital color cameras, one with a wide-angle lens for peripheral vision and one with a narrow-angle lens for foveal vision to allow simple visuo-motor behaviors. Among these are tracking and saccadic motions towards salient regions, as well as more complex visual tasks such as hand-eye coordination. We present the mechatronic design concept, the motor control system, the sensor system and the computational system. To demonstrate the capabilities of the head, we present accuracy test results, and the implementation of both open-loop and closed-loop control on the head.",
"title": ""
},
{
"docid": "5d0cdaf761922ef5caab3b00986ba87c",
"text": "OBJECTIVE\nWe have previously reported an automated method for within-modality (e.g., PET-to-PET) image alignment. We now describe modifications to this method that allow for cross-modality registration of MRI and PET brain images obtained from a single subject.\n\n\nMETHODS\nThis method does not require fiducial markers and the user is not required to identify common structures on the two image sets. To align the images, the algorithm seeks to minimize the standard deviation of the PET pixel values that correspond to each MRI pixel value. The MR images must be edited to exclude nonbrain regions prior to using the algorithm.\n\n\nRESULTS AND CONCLUSION\nThe method has been validated quantitatively using data from patients with stereotaxic fiducial markers rigidly fixed in the skull. Maximal three-dimensional errors of < 3 mm and mean three-dimensional errors of < 2 mm were measured. Computation time on a SPARCstation IPX varies from 3 to 9 min to align MR image sets with [18F]fluorodeoxyglucose PET images. The MR alignment with noisy H2(15)O PET images typically requires 20-30 min.",
"title": ""
},
{
"docid": "7a4bb28ae7c175a018b278653e32c3a1",
"text": "Additive manufacturing (AM) alias 3D printing translates computer-aided design (CAD) virtual 3D models into physical objects. By digital slicing of CAD, 3D scan, or tomography data, AM builds objects layer by layer without the need for molds or machining. AM enables decentralized fabrication of customized objects on demand by exploiting digital information storage and retrieval via the Internet. The ongoing transition from rapid prototyping to rapid manufacturing prompts new challenges for mechanical engineers and materials scientists alike. Because polymers are by far the most utilized class of materials for AM, this Review focuses on polymer processing and the development of polymers and advanced polymer systems specifically for AM. AM techniques covered include vat photopolymerization (stereolithography), powder bed fusion (SLS), material and binder jetting (inkjet and aerosol 3D printing), sheet lamination (LOM), extrusion (FDM, 3D dispensing, 3D fiber deposition, and 3D plotting), and 3D bioprinting. The range of polymers used in AM encompasses thermoplastics, thermosets, elastomers, hydrogels, functional polymers, polymer blends, composites, and biological systems. Aspects of polymer design, additives, and processing parameters as they relate to enhancing build speed and improving accuracy, functionality, surface finish, stability, mechanical properties, and porosity are addressed. Selected applications demonstrate how polymer-based AM is being exploited in lightweight engineering, architecture, food processing, optics, energy technology, dentistry, drug delivery, and personalized medicine. Unparalleled by metals and ceramics, polymer-based AM plays a key role in the emerging AM of advanced multifunctional and multimaterial systems including living biological systems as well as life-like synthetic systems.",
"title": ""
},
{
"docid": "8e5a0b0310fc77b5ca618c5b7e924d64",
"text": "Network analysis has an increasing role in our effort to understand the complexity of biological systems. This is because of our ability to generate large data sets, where the interaction or distance between biological components can be either measured experimentally or calculated. Here we describe the use of BioLayout Express3D, an application that has been specifically designed for the integration, visualization and analysis of large network graphs derived from biological data. We describe the basic functionality of the program and its ability to display and cluster large graphs in two- and three-dimensional space, thereby rendering graphs in a highly interactive format. Although the program supports the import and display of various data formats, we provide a detailed protocol for one of its unique capabilities, the network analysis of gene expression data and a more general guide to the manipulation of graphs generated from various other data types.",
"title": ""
},
{
"docid": "921b4ecaed69d7396285909bd53a3790",
"text": "Brain mapping transforms the brain cortical surface to canonical planar domains, which plays a fundamental role in morphological study. Most existing brain mapping methods are based on angle preserving maps, which may introduce large area distortions. This work proposes an area preserving brain mapping method based on Monge-Brenier theory. The brain mapping is intrinsic to the Riemannian metric, unique, and diffeomorphic. The computation is equivalent to convex energy minimization and power Voronoi diagram construction. Comparing to the existing approaches based on Monge-Kantorovich theory, the proposed one greatly reduces the complexity (from n2 unknowns to n ), and improves the simplicity and efficiency. Experimental results on caudate nucleus surface mapping and cortical surface mapping demonstrate the efficacy and efficiency of the proposed method. Conventional methods for caudate nucleus surface mapping may suffer from numerical instability, in contrast, current method produces diffeomorpic mappings stably. In the study of cortical surface classification for recognition of Alzheimer's Disease, the proposed method outperforms some other morphometry features.",
"title": ""
},
{
"docid": "4d0889329f9011adc05484382e4f5dc0",
"text": "A high level of sustained personal plaque control is fundamental for successful treatment outcomes in patients with active periodontal disease and, hence, oral hygiene instructions are the cornerstone of periodontal treatment planning. Other risk factors for periodontal disease also should be identified and modified where possible. Many restorative dental treatments in particular require the establishment of healthy periodontal tissues for their clinical success. Failure by patients to control dental plaque because of inappropriate designs and materials for restorations and prostheses will result in the long-term failure of the restorations and the loss of supporting tissues. Periodontal treatment planning considerations are also very relevant to endodontic, orthodontic and osseointegrated dental implant conditions and proposed therapies.",
"title": ""
},
{
"docid": "1705ba479a7ff33eef46e0102d4d4dd0",
"text": "Knowing the user’s point of gaze has significant potential to enhance current human-computer interfaces, given that eye movements can be used as an indicator of the attentional state of a user. The primary obstacle of integrating eye movements into today’s interfaces is the availability of a reliable, low-cost open-source eye-tracking system. Towards making such a system available to interface designers, we have developed a hybrid eye-tracking algorithm that integrates feature-based and model-based approaches and made it available in an open-source package. We refer to this algorithm as \"starburst\" because of the novel way in which pupil features are detected. This starburst algorithm is more accurate than pure feature-based approaches yet is signi?cantly less time consuming than pure modelbased approaches. The current implementation is tailored to tracking eye movements in infrared video obtained from an inexpensive head-mounted eye-tracking system. A validation study was conducted and showed that the technique can reliably estimate eye position with an accuracy of approximately one degree of visual angle.",
"title": ""
},
{
"docid": "c52c6c70ffda274af6a32ed5d1316f08",
"text": "Markov decision processes (MDPs) are powerful tools for decision making in uncertain dynamic environments. However, the solutions of MDPs are of limited practical use due to their sensitivity to distributional model parameters, which are typically unknown and have to be estimated by the decision maker. To counter the detrimental effects of estimation errors, we consider robust MDPs that offer probabilistic guarantees in view of the unknown parameters. To this end, we assume that an observation history of the MDP is available. Based on this history, we derive a confidence region that contains the unknown parameters with a pre-specified probability 1− β. Afterwards, we determine a policy that attains the highest worst-case performance over this confidence region. By construction, this policy achieves or exceeds its worst-case performance with a confidence of at least 1 − β. Our method involves the solution of tractable conic programs of moderate size. Notation For a finite set X = {1, . . . , X}, M(X ) denotes the probability simplex in R . An X -valued random variable χ has distribution m ∈ M(X ), denoted by χ ∼ m, if P(χ = x) = mx for all x ∈ X . By default, all vectors are column vectors. We denote by ek the kth canonical basis vector, while e denotes the vector whose components are all ones. In both cases, the dimension will usually be clear from the context. For square matrices A and B, the relation A B indicates that the matrix A − B is positive semidefinite. We denote the space of symmetric n × n matrices by S. The declaration f : X c 7→ Y (f : X a 7→ Y ) implies that f is a continuous (affine) function from X to Y . For a matrix A, we denote its ith row by Ai· (a row vector) and its jth column by A·j .",
"title": ""
},
{
"docid": "3afea784f4a9eb635d444a503266d7cd",
"text": "Gallium nitride high-electron mobility transistors (GaN HEMTs) have attractive properties, low on-resistances and fast switching speeds. This paper presents the characteristics of a normally-on GaN HEMT that we fabricated. Further, the circuit operation of a Class-E amplifier is analyzed. Experimental results demonstrate the excellent performance of the gate drive circuit for the normally-on GaN HEMT and the 13.56MHz radio frequency (RF) power amplifier.",
"title": ""
},
{
"docid": "7218d7f8fb8791ab35e878eb61ea92e7",
"text": "We present a novel approach for vision-based road direction detection for autonomous Unmanned Ground Vehicles (UGVs). The proposed method utilizes only monocular vision information similar to human perception to detect road directions with respect to the vehicle. The algorithm searches for a global feature of the roads due to perspective projection (so-called vanishing point) to distinguish road directions. The proposed approach consists of two stages. The first stage estimates the vanishing-point locations from single frames. The second stage uses a Rao-Blackwellised particle filter to track initial vanishing-point estimations over a sequence of images in order to provide more robust estimation. Simultaneously, the direction of the road ahead of the vehicle is predicted, which is prerequisite information for vehicle steering and path planning. The proposed approach assumes minimum prior knowledge about the environment and can cope with complex situations such as ground cover variations, different illuminations, and cast shadows. Its performance is evaluated on video sequences taken during test run of the DARPA Grand Challenge.",
"title": ""
},
{
"docid": "d2304dae0f99bf5e5b46d4ceb12c0d69",
"text": "The ultimate goal of this indoor mapping research is to automatically reconstruct a floorplan simply by walking through a house with a smartphone in a pocket. This paper tackles this problem by proposing FloorNet, a novel deep neural architecture. The challenge lies in the processing of RGBD streams spanning a large 3D space. FloorNet effectively processes the data through three neural network branches: 1) PointNet with 3D points, exploiting the 3D information; 2) CNN with a 2D point density image in a top-down view, enhancing the local spatial reasoning; and 3) CNN with RGB images, utilizing the full image information. FloorNet exchanges intermediate features across the branches to exploit the best of all the architectures. We have created a benchmark for floorplan reconstruction by acquiring RGBD video streams for 155 residential houses or apartments with Google Tango phones and annotating complete floorplan information. Our qualitative and quantitative evaluations demonstrate that the fusion of three branches effectively improves the reconstruction quality. We hope that the paper together with the benchmark will be an important step towards solving a challenging vector-graphics reconstruction problem. Code and data are available at https://github.com/art-programmer/FloorNet.",
"title": ""
},
{
"docid": "90e02beb3d51c5d3715e3baab3056561",
"text": "¶ Despite the widespread popularity of online opinion forums among consumers, the business value that such systems bring to organizations has, so far, remained an unanswered question. This paper addresses this question by studying the value of online movie ratings in forecasting motion picture revenues. First, we conduct a survey where a nationally representative sample of subjects who do not rate movies online is asked to rate a number of recent movies. Their ratings exhibit high correlation with online ratings for the same movies. We thus provide evidence for the claim that online ratings can be considered as a useful proxy for word-of-mouth about movies. Inspired by the Bass model of product diffusion, we then develop a motion picture revenue-forecasting model that incorporates the impact of both publicity and word of mouth on a movie's revenue trajectory. Using our model, we derive notably accurate predictions of a movie's total revenues from statistics of user reviews posted on Yahoo! Movies during the first week of a new movie's release. The results of our work provide encouraging evidence for the value of publicly available online forum information to firms for real-time forecasting and competitive analysis. ¶ This is a preliminary draft of a work in progress. It is being distributed to seminar participants for comments and discussion.",
"title": ""
},
{
"docid": "68c7509ec0261b1ddccef7e3ad855629",
"text": "This research comprehensively illustrates the design, implementation and evaluation of a novel marker less environment tracking technology for an augmented reality based indoor navigation application, adapted to efficiently operate on a proprietary head-mounted display. Although the display device used, Google Glass, had certain pitfalls such as short battery life, slow processing speed, and lower quality visual display but the tracking technology was able to complement these limitations by rendering a very efficient, precise, and intuitive navigation experience. The performance assessments, conducted on the basis of efficiency and accuracy, substantiated the utility of the device for everyday navigation scenarios, whereas a later conducted subjective evaluation of handheld and wearable devices also corroborated the wearable as the preferred device for indoor navigation.",
"title": ""
},
{
"docid": "a1f930147ad3c3ef48b6352e83d645d0",
"text": "Database applications such as online transaction processing (OLTP) and decision support systems (DSS) constitute the largest and fastest-growing segment of the market for multiprocessor servers. However, most current system designs have been optimized to perform well on scientific and engineering workloads. Given the radically different behavior of database workloads (especially OLTP), it is important to re-evaluate key system design decisions in the context of this important class of applications.This paper examines the behavior of database workloads on shared-memory multiprocessors with aggressive out-of-order processors, and considers simple optimizations that can provide further performance improvements. Our study is based on detailed simulations of the Oracle commercial database engine. The results show that the combination of out-of-order execution and multiple instruction issue is indeed effective in improving performance of database workloads, providing gains of 1.5 and 2.6 times over an in-order single-issue processor for OLTP and DSS, respectively. In addition, speculative techniques enable optimized implementations of memory consistency models that significantly improve the performance of stricter consistency models, bringing the performance to within 10--15% of the performance of more relaxed models.The second part of our study focuses on the more challenging OLTP workload. We show that an instruction stream buffer is effective in reducing the remaining instruction stalls in OLTP, providing a 17% reduction in execution time (approaching a perfect instruction cache to within 15%). Furthermore, our characterization shows that a large fraction of the data communication misses in OLTP exhibit migratory behavior; our preliminary results show that software prefetch and writeback/flush hints can be used for this data to further reduce execution time by 12%.",
"title": ""
},
{
"docid": "2130cc3df3443c912d9a38f83a51ab14",
"text": "Event cameras, such as dynamic vision sensors (DVS), and dynamic and activepixel vision sensors (DAVIS) can supplement other autonomous driving sensors by providing a concurrent stream of standard active pixel sensor (APS) images and DVS temporal contrast events. The APS stream is a sequence of standard grayscale global-shutter image sensor frames. The DVS events represent brightness changes occurring at a particular moment, with a jitter of about a millisecond under most lighting conditions. They have a dynamic range of >120 dB and effective frame rates >1 kHz at data rates comparable to 30 fps (frames/second) image sensors. To overcome some of the limitations of current image acquisition technology, we investigate in this work the use of the combined DVS and APS streams in endto-end driving applications. The dataset DDD17 accompanying this paper is the first open dataset of annotated DAVIS driving recordings. DDD17 has over 12 h of a 346x260 pixel DAVIS sensor recording highway and city driving in daytime, evening, night, dry and wet weather conditions, along with vehicle speed, GPS position, driver steering, throttle, and brake captured from the car’s on-board diagnostics interface. As an example application, we performed a preliminary end-toend learning study of using a convolutional neural network that is trained to predict the instantaneous steering angle from DVS and APS visual data.",
"title": ""
}
] |
scidocsrr
|
8c6fc852e3da449c0d2023434f4e7e03
|
Improving Neural Network Quantization without Retraining using Outlier Channel Splitting
|
[
{
"docid": "54d3d5707e50b979688f7f030770611d",
"text": "In this article, we describe an automatic differentiation module of PyTorch — a library designed to enable rapid research on machine learning models. It builds upon a few projects, most notably Lua Torch, Chainer, and HIPS Autograd [4], and provides a high performance environment with easy access to automatic differentiation of models executed on different devices (CPU and GPU). To make prototyping easier, PyTorch does not follow the symbolic approach used in many other deep learning frameworks, but focuses on differentiation of purely imperative programs, with a focus on extensibility and low overhead. Note that this preprint is a draft of certain sections from an upcoming paper covering all PyTorch features.",
"title": ""
},
{
"docid": "5dca1e55bd6475ff352db61580dec807",
"text": "Researches on deep neural networks with discrete parameters and their deployment in embedded systems have been active and promising topics. Although previous works have successfully reduced precision in inference, transferring both training and inference processes to low-bitwidth integers has not been demonstrated simultaneously. In this work, we develop a new method termed as “WAGE” to discretize both training and inference, where weights (W), activations (A), gradients (G) and errors (E) among layers are shifted and linearly constrained to low-bitwidth integers. To perform pure discrete dataflow for fixed-point devices, we further replace batch normalization by a constant scaling layer and simplify other components that are arduous for integer implementation. Improved accuracies can be obtained on multiple datasets, which indicates that WAGE somehow acts as a type of regularization. Empirically, we demonstrate the potential to deploy training in hardware systems such as integer-based deep learning accelerators and neuromorphic chips with comparable accuracy and higher energy efficiency, which is crucial to future AI applications in variable scenarios with transfer and continual learning demands.",
"title": ""
},
{
"docid": "6fc6167d1ef6b96d239fea03b9653865",
"text": "Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost. In order to reduce this cost, several quantization schemes have gained attention recently with some focusing on weight quantization, and others focusing on quantizing activations. This paper proposes novel techniques that target weight and activation quantizations separately resulting in an overall quantized neural network (QNN). The activation quantization technique, PArameterized Clipping acTivation (PACT), uses an activation clipping parameter α that is optimized during training to find the right quantization scale. The weight quantization scheme, statistics-aware weight binning (SAWB), finds the optimal scaling factor that minimizes the quantization error based on the statistical characteristics of the distribution of weights without the need for an exhaustive search. The combination of PACT and SAWB results in a 2-bit QNN that achieves state-of-the-art classification accuracy (comparable to full precision networks) across a range of popular models and datasets.",
"title": ""
}
] |
[
{
"docid": "d35623e1c73a30c2879a1750df295246",
"text": "Online human textual interaction often carries important emotional meanings inaccessible to computers. We propose an approach to textual emotion recognition in the context of computer-mediated communication. The proposed recognition approach works at the sentence level and uses the standard Ekman emotion classification. It is grounded in a refined keyword-spotting method that employs: a WordNet-based word lexicon, a lexicon of emoticons, common abbreviations and colloquialisms, and a set of heuristic rules. The approach is implemented through the Synesketch software system. Synesketch is published as a free, open source software library. Several Synesketch-based applications presented in the paper, such as the the emotional visual chat, stress the practical value of the approach. Finally, the evaluation of the proposed emotion recognition algorithm shows high accuracy and promising results for future research and applications.",
"title": ""
},
{
"docid": "129e01910a1798c69d01d0642a4f6bf4",
"text": "We show that Tobin's q, as proxied by the ratio of the firm's market value to its book value, increases with the firm's systematic equity risk and falls with the firm's unsystematic equity risk. Further, an increase in the firm's total equity risk is associated with a fall in q. The negative relation between the change in total risk and the change in q is robust through time for the whole sample, but it does not hold for the largest firms.",
"title": ""
},
{
"docid": "5cc3d79d7bd762e8cfd9df658acae3fc",
"text": "With almost daily improvements in capabilities of artificial intelligence it is more important than ever to develop safety software for use by the AI research community. Building on our previous work on AI Containment Problem we propose a number of guidelines which should help AI safety researchers to develop reliable sandboxing software for intelligent programs of all levels. Such safety container software will make it possible to study and analyze intelligent artificial agent while maintaining certain level of safety against information leakage, social engineering attacks and cyberattacks from within the container.",
"title": ""
},
{
"docid": "28ba4e921cb942c8022c315561abf526",
"text": "Metamaterials have attracted more and more research attentions recently. Metamaterials for electromagnetic applications consist of sub-wavelength structures designed to exhibit particular responses to an incident EM (electromagnetic) wave. Traditional EM (electromagnetic) metamaterial is constructed from thick and rigid structures, with the form-factor suitable for applications only in higher frequencies (above GHz) in microwave band. In this paper, we developed a thin and flexible metamaterial structure with small-scale unit cell that gives EM metamaterials far greater flexibility in numerous applications. By incorporating ferrite materials, the thickness and size of the unit cell of metamaterials have been effectively scaled down. The design, mechanism and development of flexible ferrite loaded metamaterials for microwave applications is described, with simulation as well as measurements. Experiments show that the ferrite film with permeability of 10 could reduce the resonant frequency. The thickness of the final metamaterials is only 0.3mm. This type of ferrite loaded metamaterials offers opportunities for various sub-GHz microwave applications, such as cloaks, absorbers, and frequency selective surfaces.",
"title": ""
},
{
"docid": "68c1cf9be287d2ccbe8c9c2ed675b39e",
"text": "The primary task of the peripheral vasculature (PV) is to supply the organs and extremities with blood, which delivers oxygen and nutrients, and to remove metabolic waste products. In addition, peripheral perfusion provides the basis of local immune response, such as wound healing and inflammation, and furthermore plays an important role in the regulation of body temperature. To adequately serve its many purposes, blood flow in the PV needs to be under constant tight regulation, both on a systemic level through nervous and hormonal control, as well as by local factors, such as metabolic tissue demand and hydrodynamic parameters. As a matter of fact, the body does not retain sufficient blood volume to fill the entire vascular space, and only 25% of the capillary bed is in use during resting state. The importance of microvascular control is clearly illustrated by the disastrous effects of uncontrolled blood pooling in the extremities, such as occurring during certain types of shock. Peripheral vascular disease (PVD) is the general name for a host of pathologic conditions of disturbed PV function. Peripheral vascular disease includes occlusive diseases of the arteries and the veins. An example is peripheral arterial occlusive disease (PAOD), which is the result of a buildup of plaque on the inside of the arterial walls, inhibiting proper blood supply to the organs. Symptoms include pain and cramping in extremities, as well as fatigue; ultimately, PAOD threatens limb vitality. The PAOD is often indicative of atherosclerosis of the heart and brain, and is therefore associated with an increased risk of myocardial infarction or cerebrovascular accident (stroke). Venous occlusive disease is the forming of blood clots in the veins, usually in the legs. Clots pose a risk of breaking free and traveling toward the lungs, where they can cause pulmonary embolism. In the legs, thromboses interfere with the functioning of the venous valves, causing blood pooling in the leg (postthrombotic syndrome) that leads to swelling and pain. Other causes of disturbances in peripheral perfusion include pathologies of the autoregulation of the microvasculature, such as in Reynaud’s disease or as a result of diabetes. To monitor vascular function, and to diagnose and monitor PVD, it is important to be able to measure and evaluate basic vascular parameters, such as arterial and venous blood flow, arterial blood pressure, and vascular compliance. Many peripheral vascular parameters can be assessed with invasive or minimally invasive procedures. Examples are the use of arterial catheters for blood pressure monitoring and the use of contrast agents in vascular X ray imaging for the detection of blood clots. Although they are sensitive and accurate, invasive methods tend to be more cumbersome to use, and they generally bear a greater risk of adverse effects compared to noninvasive techniques. These factors, in combination with their usually higher cost, limit the use of invasive techniques as screening tools. Another drawback is their restricted use in clinical research because of ethical considerations. Although many of the drawbacks of invasive techniques are overcome by noninvasive methods, the latter typically are more challenging because they are indirect measures, that is, they rely on external measurements to deduce internal physiologic parameters. 
Noninvasive techniques often make use of physical and physiologic models, and one has to be mindful of imperfections in the measurements and the models, and their impact on the accuracy of results. Noninvasive methods therefore require careful validation and comparison to accepted, direct measures, which is the reason why these methods typically undergo long development cycles. Even though the genesis of many noninvasive techniques reaches back as far as the late nineteenth century, it was the technological advances of the second half of the twentieth century in such fields as micromechanics, microelectronics, and computing technology that led to the development of practical implementations. The field of noninvasive vascular measurements has undergone a developmental explosion over the last two decades, and it is still very much a field of ongoing research and development. This article describes the most important and most frequently used methods for noninvasive assessment of the peripheral vasculature.",
"title": ""
},
{
"docid": "e8366d4e7f59fc32da001d3513cf8eee",
"text": "Multiview LSA (MVLSA) is a generalization of Latent Semantic Analysis (LSA) that supports the fusion of arbitrary views of data and relies on Generalized Canonical Correlation Analysis (GCCA). We present an algorithm for fast approximate computation of GCCA, which when coupled with methods for handling missing values, is general enough to approximate some recent algorithms for inducing vector representations of words. Experiments across a comprehensive collection of test-sets show our approach to be competitive with the state of the art.",
"title": ""
},
{
"docid": "724388aac829af9671a90793b1b31197",
"text": "We present a statistical phrase-based translation model that useshierarchical phrases — phrases that contain subphrases. The model is formally a synchronous context-free grammar but is learned from a bitext without any syntactic information. Thus it can be seen as a shift to the formal machinery of syntaxbased translation systems without any linguistic commitment. In our experiments using BLEU as a metric, the hierarchical phrasebased model achieves a relative improvement of 7.5% over Pharaoh, a state-of-the-art phrase-based system.",
"title": ""
},
{
"docid": "d3501679c9652df1faaaff4c391be567",
"text": "This paper presents a demonstration of how AI can be useful in the game design and development process of a modern board game. By using an artificial intelligence algorithm to play a substantial amount of matches of the Ticket to Ride board game and collecting data, we can analyze several features of the gameplay as well as of the game board. Results revealed loopholes in the game’s rules and pointed towards trends in how the game is played. We are then led to the conclusion that large scale simulation utilizing artificial intelligence can offer valuable information regarding modern board games and their designs that would ordinarily be prohibitively expensive or time-consuming to discover manually.",
"title": ""
},
{
"docid": "23ff4a40f9a62c8a26f3cc3f8025113d",
"text": "In the early ages of implantable devices, radio frequency (RF) technologies were not commonplace due to the challenges stemming from the inherent nature of biological tissue boundaries. As technology improved and our understanding matured, the benefit of RF in biomedical applications surpassed the implementation challenges and is thus becoming more widespread. The fundamental challenge is due to the significant electromagnetic (EM) effects of the body at high frequencies. The EM absorption and impedance boundaries of biological tissue result in significant reduction of power and signal integrity for transcutaneous propagation of RF fields. Furthermore, the dielectric properties of the body tissue surrounding the implant must be accounted for in the design of its RF components, such as antennas and inductors, and the tissue is often heterogeneous and the properties are highly variable. Additional challenges for implantable applications include the need for miniaturization, power minimization, and often accounting for a conductive casing due to biocompatibility and hermeticity requirements [1]?[3]. Today, wireless technologies are essentially a must have in most electrical implants due to the need to communicate with the device and even transfer usable energy to the implant [4], [5]. Low-frequency wireless technologies face fewer challenges in this implantable setting than its higher frequency, or RF, counterpart, but are limited to much lower communication speeds and typically have a very limited operating distance. The benefits of high-speed communication and much greater communication distances in biomedical applications have spawned numerous wireless standards committees, and the U.S. Federal Communications Commission (FCC) has allocated numerous frequency bands for medical telemetry as well as those to specifically target implantable applications. The development of analytical models, advanced EM simulation software, and representative RF human phantom recipes has significantly facilitated design and optimization of RF components for implantable applications.",
"title": ""
},
{
"docid": "00bcce935ca2e4d443941b7e90d644c9",
"text": "Nairovirus, one of five bunyaviral genera, includes seven species. Genomic sequence information is limited for members of the Dera Ghazi Khan, Hughes, Qalyub, Sakhalin, and Thiafora nairovirus species. We used next-generation sequencing and historical virus-culture samples to determine 14 complete and nine coding-complete nairoviral genome sequences to further characterize these species. Previously unsequenced viruses include Abu Mina, Clo Mor, Great Saltee, Hughes, Raza, Sakhalin, Soldado, and Tillamook viruses. In addition, we present genomic sequence information on additional isolates of previously sequenced Avalon, Dugbe, Sapphire II, and Zirqa viruses. Finally, we identify Tunis virus, previously thought to be a phlebovirus, as an isolate of Abu Hammad virus. Phylogenetic analyses indicate the need for reassignment of Sapphire II virus to Dera Ghazi Khan nairovirus and reassignment of Hazara, Tofla, and Nairobi sheep disease viruses to novel species. We also propose new species for the Kasokero group (Kasokero, Leopards Hill, Yogue viruses), the Ketarah group (Gossas, Issyk-kul, Keterah/soft tick viruses) and the Burana group (Wēnzhōu tick virus, Huángpí tick virus 1, Tǎchéng tick virus 1). Our analyses emphasize the sister relationship of nairoviruses and arenaviruses, and indicate that several nairo-like viruses (Shāyáng spider virus 1, Xīnzhōu spider virus, Sānxiá water strider virus 1, South Bay virus, Wǔhàn millipede virus 2) require establishment of novel genera in a larger nairovirus-arenavirus supergroup.",
"title": ""
},
{
"docid": "0c57dd3ce1f122d3eb11a98649880475",
"text": "Insulin resistance plays a major role in the pathogenesis of the metabolic syndrome and type 2 diabetes, and yet the mechanisms responsible for it remain poorly understood. Magnetic resonance spectroscopy studies in humans suggest that a defect in insulin-stimulated glucose transport in skeletal muscle is the primary metabolic abnormality in insulin-resistant patients with type 2 diabetes. Fatty acids appear to cause this defect in glucose transport by inhibiting insulin-stimulated tyrosine phosphorylation of insulin receptor substrate-1 (IRS-1) and IRS-1-associated phosphatidylinositol 3-kinase activity. A number of different metabolic abnormalities may increase intramyocellular and intrahepatic fatty acid metabolites; these include increased fat delivery to muscle and liver as a consequence of either excess energy intake or defects in adipocyte fat metabolism, and acquired or inherited defects in mitochondrial fatty acid oxidation. Understanding the molecular and biochemical defects responsible for insulin resistance is beginning to unveil novel therapeutic targets for the treatment of the metabolic syndrome and type 2 diabetes.",
"title": ""
},
{
"docid": "e0f89b22f215c140f69a22e6b573df41",
"text": "In this paper, a 10-bit 0.5V 100 kS/s successive approximation register (SAR) analog-to-digital converter (ADC) with a new fully dynamic rail-to-rail comparator is presented. The proposed comparator enhances the input signal range to the rail-to-rail mode, and hence, improves the signal-to-noise ratio (SNR) of the ADC in low supply voltages. The e®ect of the latch o®set voltage is reduced by providing a higher voltage gain in the regenerative latch. To reduce the ADC power consumption further, the binary-weighted capacitive array with an attenuation capacitor (BWA) is employed as the digital-to-analog converter (DAC) in this design. The ADC is designed and simulated in a 90 nm CMOS process with a single 0.5V power supply. Spectre simulation results show that the average power consumption of the proposed ADC is about 400 nW and the peak signal-to-noise plus distortion ratio (SNDR) is 56 dB. By considering 10% increase in total ADC power consumption due to the parasitics and a loss of 0.22 LSB in ENOB due to the DAC capacitors mismatch, the achieved ̄gure of merit (FoM) is 11.4 fJ/conversion-step.",
"title": ""
},
{
"docid": "759831bb109706b6963b21984a59d2d1",
"text": "Workflow management systems will change the architecture of future information systems dramatically. The explicit representation of business procedures is one of the main issues when introducing a workflow management system. In this paper we focus on a class of Petri nets suitable for the representation, validation and verification of these procedures. We will show that the correctness of a procedure represented by such a Petri net can be verified by using standard Petri-net-based techniques. Based on this result we provide a comprehensive set of transformation rules which can be used to construct and modify correct procedures.",
"title": ""
},
{
"docid": "a7287ea0f78500670fb32fc874968c54",
"text": "Image captioning is a challenging task where the machine automatically describes an image by sentences or phrases. It often requires a large number of paired image-sentence annotations for training. However, a pre-trained captioning model can hardly be applied to a new domain in which some novel object categories exist, i.e., the objects and their description words are unseen during model training. To correctly caption the novel object, it requires professional human workers to annotate the images by sentences with the novel words. It is labor expensive and thus limits its usage in real-world applications. In this paper, we introduce the zero-shot novel object captioning task where the machine generates descriptions without extra training sentences about the novel object. To tackle the challenging problem, we propose a Decoupled Novel Object Captioner (DNOC) framework that can fully decouple the language sequence model from the object descriptions. DNOC has two components. 1) A Sequence Model with the Placeholder (SM-P) generates a sentence containing placeholders. The placeholder represents an unseen novel object. Thus, the sequence model can be decoupled from the novel object descriptions. 2) A key-value object memory built upon the freely available detection model, contains the visual information and the corresponding word for each object. A query generated from the SM-P is used to retrieve the words from the object memory. The placeholder will further be filled with the correct word, resulting in a caption with novel object descriptions. The experimental results on the held-out MSCOCO dataset demonstrate the ability of DNOC in describing novel concepts.",
"title": ""
},
{
"docid": "477be87ed75b8245de5e084a366b7a6d",
"text": "This paper addresses the problem of using unmanned aerial vehicles for the transportation of suspended loads. The proposed solution introduces a novel control law capable of steering the aerial robot to a desired reference while simultaneously limiting the sway of the payload. The stability of the equilibrium is proven rigorously through the application of the nested saturation formalism. Numerical simulations demonstrating the effectiveness of the controller are provided.",
"title": ""
},
{
"docid": "c26e9f486621e37d66bf0925d8ff2a3e",
"text": "We report the first two Malaysian children with partial deletion 9p syndrome, a well delineated but rare clinical entity. Both patients had trigonocephaly, arching eyebrows, anteverted nares, long philtrum, abnormal ear lobules, congenital heart lesions and digital anomalies. In addition, the first patient had underdeveloped female genitalia and anterior anus. The second patient had hypocalcaemia and high arched palate and was initially diagnosed with DiGeorge syndrome. Chromosomal analysis revealed a partial deletion at the short arm of chromosome 9. Karyotyping should be performed in patients with craniostenosis and multiple abnormalities as an early syndromic diagnosis confers prognostic, counselling and management implications.",
"title": ""
},
{
"docid": "d76d09ca1e87eb2e08ccc03428c62be0",
"text": "Face recognition has the perception of a solved problem, however when tested at the million-scale exhibits dramatic variation in accuracies across the different algorithms [11]. Are the algorithms very different? Is access to good/big training data their secret weapon? Where should face recognition improve? To address those questions, we created a benchmark, MF2, that requires all algorithms to be trained on same data, and tested at the million scale. MF2 is a public large-scale set with 672K identities and 4.7M photos created with the goal to level playing field for large scale face recognition. We contrast our results with findings from the other two large-scale benchmarks MegaFace Challenge and MS-Celebs-1M where groups were allowed to train on any private/public/big/small set. Some key discoveries: 1) algorithms, trained on MF2, were able to achieve state of the art and comparable results to algorithms trained on massive private sets, 2) some outperformed themselves once trained on MF2, 3) invariance to aging suffers from low accuracies as in MegaFace, identifying the need for larger age variations possibly within identities or adjustment of algorithms in future testing.",
"title": ""
},
{
"docid": "ce7d164774826897e9d7386ec9159bba",
"text": "The homomorphic encryption problem has been an open one for three decades. Recently, Gentry has proposed a full solution. Subsequent works have made improvements on it. However, the time complexities of these algorithms are still too high for practical use. For example, Gentry’s homomorphic encryption scheme takes more than 900 seconds to add two 32 bit numbers, and more than 67000 seconds to multiply them. In this paper, we develop a non-circuit based symmetric-key homomorphic encryption scheme. It is proven that the security of our encryption scheme is equivalent to the large integer factorization problem, and it can withstand an attack with up to lnpoly chosen plaintexts for any predetermined , where is the security parameter. Multiplication, encryption, and decryption are almost linear in , and addition is linear in . Performance analyses show that our algorithm runs multiplication in 108 milliseconds and addition in a tenth of a millisecond for = 1024 and = 16. We further consider practical multiple-user data-centric applications. Existing homomorphic encryption schemes only consider one master key. To allow multiple users to retrieve data from a server, all users need to have the same key. In this paper, we propose to transform the master encryption key into different user keys and develop a protocol to support correct and secure communication between the users and the server using different user keys. In order to prevent collusion between some user and the server to derive the master key, one or more key agents can be added to mediate the interaction.",
"title": ""
}
] |
scidocsrr
|
39f503ca38fba95c34d6da204039c84e
|
5G Millimeter-Wave Antenna Array: Design and Challenges
|
[
{
"docid": "40d28bd6b2caedec17a0990b8020c918",
"text": "The fourth generation wireless communication systems have been deployed or are soon to be deployed in many countries. However, with an explosion of wireless mobile devices and services, there are still some challenges that cannot be accommodated even by 4G, such as the spectrum crisis and high energy consumption. Wireless system designers have been facing the continuously increasing demand for high data rates and mobility required by new wireless applications and therefore have started research on fifth generation wireless systems that are expected to be deployed beyond 2020. In this article, we propose a potential cellular architecture that separates indoor and outdoor scenarios, and discuss various promising technologies for 5G wireless communication systems, such as massive MIMO, energy-efficient communications, cognitive radio networks, and visible light communications. Future challenges facing these potential technologies are also discussed.",
"title": ""
},
{
"docid": "ed676ff14af6baf9bde3bdb314628222",
"text": "The ever growing traffic explosion in mobile communications has recently drawn increased attention to the large amount of underutilized spectrum in the millimeter-wave frequency bands as a potentially viable solution for achieving tens to hundreds of times more capacity compared to current 4G cellular networks. Historically, mmWave bands were ruled out for cellular usage mainly due to concerns regarding short-range and non-line-of-sight coverage issues. In this article, we present recent results from channel measurement campaigns and the development of advanced algorithms and a prototype, which clearly demonstrate that the mmWave band may indeed be a worthy candidate for next generation (5G) cellular systems. The results of channel measurements carried out in both the United States and Korea are summarized along with the actual free space propagation measurements in an anechoic chamber. Then a novel hybrid beamforming scheme and its link- and system-level simulation results are presented. Finally, recent results from our mmWave prototyping efforts along with indoor and outdoor test results are described to assert the feasibility of mmWave bands for cellular usage.",
"title": ""
}
] |
[
{
"docid": "9b2e025c6bb8461ddb076301003df0e4",
"text": "People are sharing their opinions, stories and reviews through online video sharing websites every day. Studying sentiment and subjectivity in these opinion videos is experiencing a growing attention from academia and industry. While sentiment analysis has been successful for text, it is an understudied research question for videos and multimedia content. The biggest setbacks for studies in this direction are lack of a proper dataset, methodology, baselines and statistical analysis of how information from different modality sources relate to each other. This paper introduces to the scientific community the first opinion-level annotated corpus of sentiment and subjectivity analysis in online videos called Multimodal Opinionlevel Sentiment Intensity dataset (MOSI). The dataset is rigorously annotated with labels for subjectivity, sentiment intensity, per-frame and per-opinion annotated visual features, and per-milliseconds annotated audio features. Furthermore, we present baselines for future studies in this direction as well as a new multimodal fusion approach that jointly models spoken words and visual gestures.",
"title": ""
},
{
"docid": "204ae059e0856f8531b67b707ee3f068",
"text": "In highly regulated industries such as aerospace, the introduction of new quality standard can provide the framework for developing and formulating innovative novel business models which become the foundation to build a competitive, customer-centric enterprise. A number of enterprise modeling methods have been developed in recent years mainly to offer support for enterprise design and help specify systems requirements and solutions. However, those methods are inefficient in providing sufficient support for quality systems links and assessment. The implementation parts of the processes linked to the standards remain unclear and ambiguous for the practitioners as a result of new standards introduction. This paper proposed to integrate new revision of AS/EN9100 aerospace quality elements through systematic integration approach which can help the enterprises in business re-engineering process. The assessment capability model is also presented to identify impacts on the existing system as a result of introducing new standards.",
"title": ""
},
{
"docid": "2d17b30942ce0984dcbcf5ca5ba38bd2",
"text": "We review the literature on the relation between narcissism and consumer behavior. Consumer behavior is sometimes guided by self-related motives (e.g., self-enhancement) rather than by rational economic considerations. Narcissism is a case in point. This personality trait reflects a self-centered, self-aggrandizing, dominant, and manipulative orientation. Narcissists are characterized by exhibitionism and vanity, and they see themselves as superior and entitled. To validate their grandiose self-image, narcissists purchase high-prestige products (i.e., luxurious, exclusive, flashy), show greater interest in the symbolic than utilitarian value of products, and distinguish themselves positively from others via their materialistic possessions. Our review lays the foundation for a novel methodological approach in which we explore how narcissism influences eye movement behavior during consumer decision-making. We conclude with a description of our experimental paradigm and report preliminary results. Our findings will provide insight into the mechanisms underlying narcissists' conspicuous purchases. They will also likely have implications for theories of personality, consumer behavior, marketing, advertising, and visual cognition.",
"title": ""
},
{
"docid": "14fe4e2fb865539ad6f767b9fc9c1ff5",
"text": "BACKGROUND\nFetal tachyarrhythmia may result in low cardiac output and death. Consequently, antiarrhythmic treatment is offered in most affected pregnancies. We compared 3 drugs commonly used to control supraventricular tachycardia (SVT) and atrial flutter (AF).\n\n\nMETHODS AND RESULTS\nWe reviewed 159 consecutive referrals with fetal SVT (n=114) and AF (n=45). Of these, 75 fetuses with SVT and 36 with AF were treated nonrandomly with transplacental flecainide (n=35), sotalol (n=52), or digoxin (n=24) as a first-line agent. Prenatal treatment failure was associated with an incessant versus intermittent arrhythmia pattern (n=85; hazard ratio [HR]=3.1; P<0.001) and, for SVT, with fetal hydrops (n=28; HR=1.8; P=0.04). Atrial flutter had a lower rate of conversion to sinus rhythm before delivery than SVT (HR=2.0; P=0.005). Cardioversion at 5 and 10 days occurred in 50% and 63% of treated SVT cases, respectively, but in only 25% and 41% of treated AF cases. Sotalol was associated with higher rates of prenatal AF termination than digoxin (HR=5.4; P=0.05) or flecainide (HR=7.4; P=0.03). If incessant AF/SVT persisted to day 5 (n=45), median ventricular rates declined more with flecainide (-22%) and digoxin (-13%) than with sotalol (-5%; P<0.001). Flecainide (HR=2.1; P=0.02) and digoxin (HR=2.9; P=0.01) were also associated with a higher rate of conversion of fetal SVT to a normal rhythm over time. No serious drug-related adverse events were observed, but arrhythmia-related mortality was 5%.\n\n\nCONCLUSION\nFlecainide and digoxin were superior to sotalol in converting SVT to a normal rhythm and in slowing both AF and SVT to better-tolerated ventricular rates and therefore might be considered first to treat significant fetal tachyarrhythmia.",
"title": ""
},
{
"docid": "a1e50fdb1bde8730a3201d771135eb68",
"text": "This paper briefly introduces an approach to the problem of building semantic interpretations of nominal ComDounds, i.e. sequences of two or more nouns related through modification. Examples of the kinds of nominal compounds dealt with are: \"engine repairs\", \"aircraft flight arrival\", ~aluminum water pump\", and \"noun noun modification\".",
"title": ""
},
{
"docid": "fd7a6e8eed4391234812018237434283",
"text": "Due to the increase of the number of wind turbines connected directly to the electric utility grid, new regulator codes have been issued that require low-voltage ride-through capability for wind turbines so that they can remain online and support the electric grid during voltage sags. Conventional ride-through techniques for the doubly fed induction generator (DFIG) architecture result in compromised control of the turbine shaft and grid current during fault events. In this paper, a series passive-impedance network at the stator side of a DFIG wind turbine is presented. It is easy to control, capable of off-line operation for high efficiency, and low cost for manufacturing and maintenance. The balanced and unbalanced fault responses of a DFIG wind turbine with a series grid side passive-impedance network are examined using computer simulations and hardware experiments.",
"title": ""
},
{
"docid": "2059db0707ffc28fd62b7387ba6d09ae",
"text": "Embedded quantization is a mechanism employed by many lossy image codecs to progressively refine the distortion of a (transformed) image. Currently, the most common approach to do so in the context of wavelet-based image coding is to couple uniform scalar deadzone quantization (USDQ) with bitplane coding (BPC). USDQ+BPC is convenient for its practicality and has proved to achieve competitive coding performance. But the quantizer established by this scheme does not allow major variations. This paper introduces a multistage quantization scheme named general embedded quantization (GEQ) that provides more flexibility to the quantizer. GEQ schemes can be devised for specific decoding rates achieving optimal coding performance. Practical approaches of GEQ schemes achieve coding performance similar to that of USDQ+BPC while requiring fewer quantization stages. The performance achieved by GEQ is evaluated in this paper through experimental results carried out in the framework of modern image coding systems.",
"title": ""
},
{
"docid": "103ec725b4c07247f1a8884610ea0e42",
"text": "In this paper we have introduced the notion of distance between two single valued neutrosophic sets and studied its properties. We have also defined several similarity measures between them and investigated their characteristics. A measure of entropy of a single valued neutrosophic set has also been introduced.",
"title": ""
},
{
"docid": "8284163c893d79213b6573249a0f0a32",
"text": "Clustering is a core building block for data analysis, aiming to extract otherwise hidden structures and relations from raw datasets, such as particular groups that can be effectively related, compared, and interpreted. A plethora of visual-interactive cluster analysis techniques has been proposed to date, however, arriving at useful clusterings often requires several rounds of user interactions to fine-tune the data preprocessing and algorithms. We present a multi-stage Visual Analytics (VA) approach for iterative cluster refinement together with an implementation (SOMFlow) that uses Self-Organizing Maps (SOM) to analyze time series data. It supports exploration by offering the analyst a visual platform to analyze intermediate results, adapt the underlying computations, iteratively partition the data, and to reflect previous analytical activities. The history of previous decisions is explicitly visualized within a flow graph, allowing to compare earlier cluster refinements and to explore relations. We further leverage quality and interestingness measures to guide the analyst in the discovery of useful patterns, relations, and data partitions. We conducted two pair analytics experiments together with a subject matter expert in speech intonation research to demonstrate that the approach is effective for interactive data analysis, supporting enhanced understanding of clustering results as well as the interactive process itself.",
"title": ""
},
{
"docid": "e353843f2f5102c263d18382168e2c69",
"text": "The number of adult learners who participate in online learning has rapidly grown in the last two decades due to online learning's many advantages. In spite of the growth, the high dropout rate in online learning has been of concern to many higher education institutions and organizations. The purpose of this study was to determine whether persistent learners and dropouts are different in individual characteristics (i.e., age, gender, and educational level), external factors (i.e., family and organizational supports), and internal factors (i.e., satisfaction and relevance as sub-dimensions of motivation). Quantitative data were collected from 147 learners who had dropped out of or finished one of the online courses offered from a large Midwestern university. Dropouts and persistent learners showed statistical differences in perceptions of family and organizational support, and satisfaction and relevance. It was also shown that the theoretical framework, which includes family support, organizational support, satisfaction, and relevance in addition to individual characteristics, is able to predict learners' decision to drop out or persist. Organizational 9upport and relevance were shown to be particularly predictive. The results imply that lower dropout rates can be achieved if online program developers or instrdctors find ways to enhance the relevance of the course. It also implies thai adult learners need to be supported by their organizations in order for them to finish online courses that they register for.",
"title": ""
},
{
"docid": "1501d5173376a06a3b9c30c617abfe31",
"text": "^^ir jEdmund Hillary of Mount Everest \\ fajne liked to tell a story about one of ^J Captain Robert Falcon Scott's earlier attempts, from 1901 to 1904, to reach the South Pole. Scott led an expedition made up of men from thb Royal Navy and the merchant marine, as jwell as a group of scientists. Scott had considel'able trouble dealing with the merchant n|arine personnel, who were unaccustomed ip the rigid discipline of Scott's Royal Navy. S|:ott wanted to send one seaman home because he would not take orders, but the seaman refused, arguing that he had signed a contract and knew his rights. Since the seaman wds not subject to Royal Navy disciplinary action, Scott did not know what to do. Then Ernest Shackleton, a merchant navy officer in $cott's party, calmly informed the seaman th^t he, the seaman, was returning to Britain. Again the seaman refused —and Shackle^on knocked him to the ship's deck. After ar^other refusal, followed by a second flooring, the seaman decided he would retuijn home. Scott later became one of the victims of his own inadequacies as a leader in his 1911 race to the South Pole. Shackleton went qn to lead many memorable expeditions; once, seeking help for the rest of his party, who were stranded on the Antarctic Coast, he journeyed with a small crew in a small open boat from the edge of Antarctica to Souilh Georgia Island.",
"title": ""
},
{
"docid": "343a2035ca2136bc38451c0e92aeb7fc",
"text": "Synaptic plasticity is considered to be the biological substrate of learning and memory. In this document we review phenomenological models of short-term and long-term synaptic plasticity, in particular spike-timing dependent plasticity (STDP). The aim of the document is to provide a framework for classifying and evaluating different models of plasticity. We focus on phenomenological synaptic models that are compatible with integrate-and-fire type neuron models where each neuron is described by a small number of variables. This implies that synaptic update rules for short-term or long-term plasticity can only depend on spike timing and, potentially, on membrane potential, as well as on the value of the synaptic weight, or on low-pass filtered (temporally averaged) versions of the above variables. We examine the ability of the models to account for experimental data and to fulfill expectations derived from theoretical considerations. We further discuss their relations to teacher-based rules (supervised learning) and reward-based rules (reinforcement learning). All models discussed in this paper are suitable for large-scale network simulations.",
"title": ""
},
{
"docid": "b0741999659724f8fa5dc1117ec86f0d",
"text": "With the rapidly growing scales of statistical problems, subset based communicationfree parallel MCMC methods are a promising future for large scale Bayesian analysis. In this article, we propose a new Weierstrass sampler for parallel MCMC based on independent subsets. The new sampler approximates the full data posterior samples via combining the posterior draws from independent subset MCMC chains, and thus enjoys a higher computational efficiency. We show that the approximation error for the Weierstrass sampler is bounded by some tuning parameters and provide suggestions for choice of the values. Simulation study shows the Weierstrass sampler is very competitive compared to other methods for combining MCMC chains generated for subsets, including averaging and kernel smoothing.",
"title": ""
},
{
"docid": "be7f7d9c6a28b7d15ec381570752de95",
"text": "Neural network are most popular in the research community due to its generalization abilities. Additionally, it has been successfully implemented in biometrics, features selection, object tracking, document image preprocessing and classification. This paper specifically, clusters, summarize, interpret and evaluate neural networks in document Image preprocessing. The importance of the learning algorithms in neural networks training and testing for preprocessing is also highlighted. Finally, a critical analysis on the reviewed approaches and the future research guidelines in the field are suggested.",
"title": ""
},
{
"docid": "0b11d414b25a0bc7262dafc072264ff2",
"text": "Selecting appropriate words to compose a sentence is one common problem faced by non-native Chinese learners. In this paper, we propose (bidirectional) LSTM sequence labeling models and explore various features to detect word usage errors in Chinese sentences. By combining CWINDOW word embedding features and POS information, the best bidirectional LSTM model achieves accuracy 0.5138 and MRR 0.6789 on the HSK dataset. For 80.79% of the test data, the model ranks the groundtruth within the top two at position level.",
"title": ""
},
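The abstract above describes tagging word-usage errors with a (bidirectional) LSTM sequence labeler. The following is a minimal PyTorch sketch of a generic BiLSTM tagger under assumed sizes; it omits the paper's CWINDOW embeddings and POS features and is not the authors' implementation.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Toy bidirectional LSTM sequence labeler: embeds tokens, runs a BiLSTM,
    and scores each position with per-tag logits."""
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_tags)

    def forward(self, token_ids):          # token_ids: (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))
        return self.out(h)                 # (batch, seq_len, num_tags)

# Hypothetical usage with a 10k-word vocabulary and 2 labels (correct / misused word)
model = BiLSTMTagger(vocab_size=10000, num_tags=2)
logits = model(torch.randint(0, 10000, (4, 20)))
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 2),
                             torch.zeros(4 * 20, dtype=torch.long))
```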
{
"docid": "dca2900c2b002e3119435bcf983c5aac",
"text": "Substantial evidence suggests that the accumulation of beta-amyloid (Abeta)-derived peptides contributes to the aetiology of Alzheimer's disease (AD) by stimulating formation of free radicals. Thus, the antioxidant alpha-lipoate, which is able to cross the blood-brain barrier, would seem an ideal substance in the treatment of AD. We have investigated the potential effectiveness of alpha-lipoic acid (LA) against cytotoxicity induced by Abeta peptide (31-35) (30 microM) and hydrogen peroxide (H(2)O(2)) (100 microM) with the cellular 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide (MTT) reduction and fluorescence dye propidium iodide assays in primary neurons of rat cerebral cortex. We found that treatment with LA protected cortical neurons against cytotoxicity induced by Abeta or H(2)O(2). In addition, LA-induced increase in the level of Akt in the neurons was observed by Western blot. The LA-induced neuroprotection and Akt increase were attenuated by pre-treatment with the phosphatidylinositol 3-kinase inhibitor, LY294002 (50 microM). Our data suggest that the neuroprotective effects of the antioxidant LA are partly mediated through activation of the PKB/Akt signaling pathway.",
"title": ""
},
{
"docid": "07c63e6e7ec9e64e9f19ec099e6c3c00",
"text": "Despite their remarkable performance in various machine intelligence tasks, the computational intensity of Convolutional Neural Networks (CNNs) has hindered their widespread utilization in resource-constrained embedded and IoT systems. To address this problem, we present a framework for synthesis of efficient CNN inference software targeting mobile SoC platforms. We argue that thread granularity can substantially impact the performance and energy dissipation of the synthesized inference software, and demonstrate that launching the maximum number of logical threads, often promoted as a guiding principle by GPGPU practitioners, does not result in an efficient implementation for mobile SoCs. We hypothesize that the runtime of a CNN layer on a particular SoC platform can be accurately estimated as a linear function of its computational complexity, which may seem counter-intuitive, as modern mobile SoCs utilize a plethora of heterogeneous architectural features and dynamic resource management policies. Consequently, we develop a principled approach and a data-driven analytical model to optimize granularity of threads during CNN software synthesis. Experimental results with several modern CNNs mapped to a commodity Android smartphone with a Snapdragon SoC show up to 2.37X speedup in application runtime, and up to 1.9X improvement in its energy dissipation compared to existing approaches.",
"title": ""
},
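The abstract above hypothesizes that a CNN layer's runtime on a given SoC can be estimated as a linear function of its computational complexity. A small sketch of fitting such a linear model by least squares follows; the FLOP counts and runtimes are invented numbers for illustration only.

```python
import numpy as np

# Hypothetical measurements: per-layer compute (GFLOPs) and measured
# runtimes (ms) on some target SoC. Values are made up for illustration.
flops   = np.array([0.2, 0.5, 1.1, 2.3, 3.8])
runtime = np.array([4.1, 8.9, 19.5, 40.2, 66.0])

# Fit runtime ~ a * flops + b by ordinary least squares.
a, b = np.polyfit(flops, runtime, deg=1)
print(f"runtime ~ {a:.2f} ms/GFLOP * flops + {b:.2f} ms")

# Predict the runtime of an unseen layer with 1.5 GFLOPs.
print(a * 1.5 + b)
```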
{
"docid": "da981709f7a0ff7f116fe632b7a989db",
"text": "A method is presented for locating protein antigenic determinants by analyzing amino acid sequences in order to find the point of greatest local hydrophilicity. This is accomplished by assigning each amino acid a numerical value (hydrophilicity value) and then repetitively averaging these values along the peptide chain. The point of highest local average hydrophilicity is invariably located in, or immediately adjacent to, an antigenic determinant. It was found that the prediction success rate depended on averaging group length, with hexapeptide averages yielding optimal results. The method was developed using 12 proteins for which extensive immunochemical analysis has been carried out and subsequently was used to predict antigenic determinants for the following proteins: hepatitis B surface antigen, influenza hemagglutinins, fowl plague virus hemagglutinin, human histocompatibility antigen HLA-B7, human interferons, Escherichia coli and cholera enterotoxins, ragweed allergens Ra3 and Ra5, and streptococcal M protein. The hepatitis B surface antigen sequence was synthesized by chemical means and was shown to have antigenic activity by radioimmunoassay.",
"title": ""
},
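The method described above assigns each residue a hydrophilicity value and repeatedly averages over a hexapeptide window to locate the most hydrophilic point. A toy Python sketch of that windowed search follows; the per-residue values in the dictionary are placeholders, not the published scale.

```python
# Placeholder hydrophilicity values for a few residues; the real method uses
# a published per-amino-acid scale.
HYDRO = {"A": -0.5, "D": 3.0, "E": 3.0, "G": 0.0, "K": 3.0,
         "L": -1.8, "P": 0.0, "R": 3.0, "S": 0.3, "V": -1.5}

def most_hydrophilic_window(seq, window=6):
    """Return (start_index, average) of the window with the highest mean
    hydrophilicity; hexapeptide windows were reported to work best."""
    best = None
    for i in range(len(seq) - window + 1):
        avg = sum(HYDRO.get(aa, 0.0) for aa in seq[i:i + window]) / window
        if best is None or avg > best[1]:
            best = (i, avg)
    return best

print(most_hydrophilic_window("GLVKAADDRKSPEGLV"))
```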
{
"docid": "6f2720e4f63b5d3902810ee5b2c17f2b",
"text": "Latent structured prediction theory proposes powerful methods such as Latent Structural SVM (LSSVM), which can potentially be very appealing for coreference resolution (CR). In contrast, only small work is available, mainly targeting the latent structured perceptron (LSP). In this paper, we carried out a practical study comparing for the first time online learning with LSSVM. We analyze the intricacies that may have made initial attempts to use LSSVM fail, i.e., a huge training time and much lower accuracy produced by Kruskal’s spanning tree algorithm. In this respect, we also propose a new effective feature selection approach for improving system efficiency. The results show that LSP, if correctly parameterized, produces the same performance as LSSVM, being at the same time much more efficient.",
"title": ""
},
{
"docid": "695264db0ca1251ab0f63b04d41c68cd",
"text": "Reading comprehension tasks test the ability of models to process long-term context and remember salient information. Recent work has shown that relatively simple neural methods such as the Attention Sum-Reader can perform well on these tasks; however, these systems still significantly trail human performance. Analysis suggests that many of the remaining hard instances are related to the inability to track entity-references throughout documents. This work focuses on these hard entity tracking cases with two extensions: (1) additional entity features, and (2) training with a multi-task tracking objective. We show that these simple modifications improve performance both independently and in combination, and we outperform the previous state of the art on the LAMBADA dataset, particularly on difficult entity examples.",
"title": ""
}
] |
scidocsrr
|
64f762aaf0e35b18b6c5c9804f5fcf45
|
HAGP: A Hub-Centric Asynchronous Graph Processing Framework for Scale-Free Graph
|
[
{
"docid": "216d4c4dc479588fb91a27e35b4cb403",
"text": "At extreme scale, irregularities in the structure of scale-free graphs such as social network graphs limit our ability to analyze these important and growing datasets. A key challenge is the presence of high-degree vertices (hubs), that leads to parallel workload and storage imbalances. The imbalances occur because existing partitioning techniques are not able to effectively partition high-degree vertices.\n We present techniques to distribute storage, computation, and communication of hubs for extreme scale graphs in distributed memory supercomputers. To balance the hub processing workload, we distribute hub data structures and related computation among a set of delegates. The delegates coordinate using highly optimized, yet portable, asynchronous broadcast and reduction operations. We demonstrate scalability of our new algorithmic technique using Breadth-First Search (BFS), Single Source Shortest Path (SSSP), K-Core Decomposition, and PageRank on synthetically generated scale-free graphs. Our results show excellent scalability on large scale-free graphs up to 131K cores of the IBM BG/P, and outperform the best known Graph500 performance on BG/P Intrepid by 15%.",
"title": ""
},
{
"docid": "e9b89400c6bed90ac8c9465e047538e7",
"text": "Myriad of graph-based algorithms in machine learning and data mining require parsing relational data iteratively. These algorithms are implemented in a large-scale distributed environment to scale to massive data sets. To accelerate these large-scale graph-based iterative computations, we propose delta-based accumulative iterative computation (DAIC). Different from traditional iterative computations, which iteratively update the result based on the result from the previous iteration, DAIC updates the result by accumulating the “changes” between iterations. By DAIC, we can process only the “changes” to avoid the negligible updates. Furthermore, we can perform DAIC asynchronously to bypass the high-cost synchronous barriers in heterogeneous distributed environments. Based on the DAIC model, we design and implement an asynchronous graph processing framework, Maiter. We evaluate Maiter on local cluster as well as on Amazon EC2 Cloud. The results show that Maiter achieves as much as 60 × speedup over Hadoop and outperforms other state-of-the-art frameworks.",
"title": ""
}
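The second abstract above summarizes delta-based accumulative iterative computation (DAIC), where only the "changes" between iterations are propagated and accumulated. As a loose, single-machine illustration of the accumulation idea (not the Maiter system itself), here is a toy delta-accumulated PageRank; the damping factor and tolerance are arbitrary, and vertices are assumed to appear as keys of the adjacency map.

```python
from collections import defaultdict

def delta_pagerank(out_edges, d=0.85, eps=1e-6):
    """Toy delta-accumulated PageRank: each vertex keeps an accumulated rank
    and a pending delta; only non-negligible deltas are propagated.
    out_edges maps a vertex to its list of out-neighbours."""
    rank = defaultdict(float)
    delta = {v: 1.0 - d for v in out_edges}          # initial change at every vertex
    active = set(out_edges)
    while active:
        v = active.pop()
        change, delta[v] = delta[v], 0.0
        rank[v] += change                             # accumulate the change
        nbrs = out_edges.get(v, [])
        if not nbrs:
            continue
        share = d * change / len(nbrs)
        for u in nbrs:
            delta[u] = delta.get(u, 0.0) + share
            if delta[u] > eps:                        # reactivate only significant updates
                active.add(u)
    return dict(rank)

print(delta_pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))
```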
] |
[
{
"docid": "3f5f7b099dff64deca2a265c89ff481e",
"text": "We describe a learning-based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labeling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes. For robustness against local silhouette segmentation errors, silhouette shape is encoded by histogram-of-shape-contexts descriptors. We evaluate several different regression methods: ridge regression, relevance vector machine (RVM) regression, and support vector machine (SVM) regression over both linear and kernel bases. The RVMs provide much sparser regressors without compromising performance, and kernel bases give a small but worthwhile improvement in performance. The loss of depth and limb labeling information often makes the recovery of 3D pose from single silhouettes ambiguous. To handle this, the method is embedded in a novel regressive tracking framework, using dynamics from the previous state estimate together with a learned regression value to disambiguate the pose. We show that the resulting system tracks long sequences stably. For realism and good generalization over a wide range of viewpoints, we train the regressors on images resynthesized from real human motion capture data. The method is demonstrated for several representations of full body pose, both quantitatively on independent but similar test data and qualitatively on real image sequences. Mean angular errors of 4-6/spl deg/ are obtained for a variety of walking motions.",
"title": ""
},
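The work above recovers 3D pose by regressing directly from silhouette shape descriptors, with ridge regression as one of the evaluated regressors. A tiny scikit-learn sketch of multi-output ridge regression on synthetic stand-in data follows; the dimensions and data are invented for illustration, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in data: 500 "silhouette descriptors" of dimension 100, mapped to
# 55 pose parameters (e.g., joint angles). Real inputs would be
# histogram-of-shape-contexts vectors and motion-capture pose targets.
X = rng.normal(size=(500, 100))                       # descriptor vectors
true_W = rng.normal(size=(100, 55))
Y = X @ true_W + 0.1 * rng.normal(size=(500, 55))     # noisy pose targets

# Ridge regression handles the multi-output case directly.
reg = Ridge(alpha=1.0).fit(X, Y)
pose_estimate = reg.predict(X[:1])                    # predicted 55-dim pose
print(pose_estimate.shape)                            # (1, 55)
```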
{
"docid": "176dc8d5d0ed24cc9822924ae2b8ca9b",
"text": "Detection of image forgery is an important part of digital forensics and has attracted a lot of attention in the past few years. Previous research has examined residual pattern noise, wavelet transform and statistics, image pixel value histogram and other features of images to authenticate the primordial nature. With the development of neural network technologies, some effort has recently applied convolutional neural networks to detecting image forgery to achieve high-level image representation. This paper proposes to build a convolutional neural network different from the related work in which we try to understand extracted features from each convolutional layer and detect different types of image tampering through automatic feature learning. The proposed network involves five convolutional layers, two full-connected layers and a Softmax classifier. Our experiment has utilized CASIA v1.0, a public image set that contains authentic images and splicing images, and its further reformed versions containing retouching images and re-compressing images as the training data. Experimental results can clearly demonstrate the effectiveness and adaptability of the proposed network.",
"title": ""
},
{
"docid": "5c0a3aa0a50487611a64905655164b89",
"text": "Cloud radio access network (C-RAN) refers to the visualization of base station functionalities by means of cloud computing. This results in a novel cellular architecture in which low-cost wireless access points, known as radio units or remote radio heads, are centrally managed by a reconfigurable centralized \"cloud\", or central, unit. C-RAN allows operators to reduce the capital and operating expenses needed to deploy and maintain dense heterogeneous networks. This critical advantage, along with spectral efficiency, statistical multiplexing and load balancing gains, make C-RAN well positioned to be one of the key technologies in the development of 5G systems. In this paper, a succinct overview is presented regarding the state of the art on the research on C-RAN with emphasis on fronthaul compression, baseband processing, medium access control, resource allocation, system-level considerations and standardization efforts.",
"title": ""
},
{
"docid": "95bbe5d13f3ca5f97d01f2692a9dc77a",
"text": "Moringa oleifera Lam. (family; Moringaceae), commonly known as drumstick, have been used for centuries as a part of the Ayurvedic system for several diseases without having any scientific data. Demineralized water was used to prepare aqueous extract by maceration for 24 h and complete metabolic profiling was performed using GC-MS and HPLC. Hypoglycemic properties of extract have been tested on carbohydrate digesting enzyme activity, yeast cell uptake, muscle glucose uptake, and intestinal glucose absorption. Type 2 diabetes was induced by feeding high-fat diet (HFD) for 8 weeks and a single injection of streptozotocin (STZ, 45 mg/kg body weight, intraperitoneally) was used for the induction of type 1 diabetes. Aqueous extract of M. oleifera leaf was given orally at a dose of 100 mg/kg to STZ-induced rats and 200 mg/kg in HFD mice for 3 weeks after diabetes induction. Aqueous extract remarkably inhibited the activity of α-amylase and α-glucosidase and it displayed improved antioxidant capacity, glucose tolerance and rate of glucose uptake in yeast cell. In STZ-induced diabetic rats, it produces a maximum fall up to 47.86% in acute effect whereas, in chronic effect, it was 44.5% as compared to control. The fasting blood glucose, lipid profile, liver marker enzyme level were significantly (p < 0.05) restored in both HFD and STZ experimental model. Multivariate principal component analysis on polar and lipophilic metabolites revealed clear distinctions in the metabolite pattern in extract and in blood after its oral administration. Thus, the aqueous extract can be used as phytopharmaceuticals for the management of diabetes by using as adjuvants or alone.",
"title": ""
},
{
"docid": "af973255ab5f85a5dfb8dd73c19891a0",
"text": "I use the example of the 2000 US Presidential election to show that political controversies with technical underpinnings are not resolved by technical means. Then, drawing from examples such as climate change, genetically modified foods, and nuclear waste disposal, I explore the idea that scientific inquiry is inherently and unavoidably subject to becoming politicized in environmental controversies. I discuss three reasons for this. First, science supplies contesting parties with their own bodies of relevant, legitimated facts about nature, chosen in part because they help make sense of, and are made sensible by, particular interests and normative frameworks. Second, competing disciplinary approaches to understanding the scientific bases of an environmental controversy may be causally tied to competing value-based political or ethical positions. The necessity of looking at nature through a variety of disciplinary lenses brings with it a variety of normative lenses, as well. Third, it follows from the foregoing that scientific uncertainty, which so often occupies a central place in environmental controversies, can be understood not as a lack of scientific understanding but as the lack of coherence among competing scientific understandings, amplified by the various political, cultural, and institutional contexts within which science is carried out. In light of these observations, I briefly explore the problem of why some types of political controversies become “scientized” and others do not, and conclude that the value bases of disputes underlying environmental controversies must be fully articulated and adjudicated through political means before science can play an effective role in resolving environmental problems. © 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6b6b4de917de527351939c3493581275",
"text": "Several studies have used the Edinburgh Postnatal Depression Scale (EPDS), developed to screen new mothers, also for new fathers. This study aimed to further contribute to this knowledge by comparing assessment of possible depression in fathers and associated demographic factors by the EPDS and the Gotland Male Depression Scale (GMDS), developed for \"male\" depression screening. The study compared EPDS score ≥10 and ≥12, corresponding to minor and major depression, respectively, in relation to GMDS score ≥13. At 3-6 months after child birth, a questionnaire was sent to 8,011 fathers of whom 3,656 (46%) responded. The detection of possibly depressed fathers by EPDS was 8.1% at score ≥12, comparable to the 8.6% detected by the GMDS. At score ≥10, the proportion detected by EPDS increased to 13.3%. Associations with possible risk factors were analyzed for fathers detected by one or both scales. A low income was associated with depression in all groups. Fathers detected by EPDS alone were at higher risk if they had three or more children, or lower education. Fathers detected by EPDS alone at score ≥10, or by both scales at EPDS score ≥12, more often were born in a foreign country. Seemingly, the EPDS and the GMDS are associated with different demographic risk factors. The EPDS score appears critical since 5% of possibly depressed fathers are excluded at EPDS cutoff 12. These results suggest that neither scale alone is sufficient for depression screening in new fathers, and that the decision of EPDS cutoff is crucial.",
"title": ""
},
{
"docid": "4d2c5785e60fa80febb176165622fca7",
"text": "In this paper, we propose a new algorithm to compute intrinsic means of organ shapes from 3D medical images. More specifically, we explore the feasibility of Karcher means in the framework of the large deformations by diffeomorphisms (LDDMM). This setting preserves the topology of the averaged shapes and has interesting properties to quantitatively describe their anatomical variability. Estimating Karcher means requires to perform multiple registrations between the averaged template image and the set of reference 3D images. Here, we use a recent algorithm based on an optimal control method to satisfy the geodesicity of the deformations at any step of each registration. We also combine this algorithm with organ specific metrics. We demonstrate the efficiency of our methodology with experimental results on different groups of anatomical 3D images. We also extensively discuss the convergence of our method and the bias due to the initial guess. A direct perspective of this work is the computation of 3D+time atlases.",
"title": ""
},
{
"docid": "5946378b291a1a0e1fb6df5cd57d716f",
"text": "Robots are being deployed in an increasing variety of environments for longer periods of time. As the number of robots grows, they will increasingly need to interact with other robots. Additionally, the number of companies and research laboratories producing these robots is increasing, leading to the situation where these robots may not share a common communication or coordination protocol. While standards for coordination and communication may be created, we expect that robots will need to additionally reason intelligently about their teammates with limited information. This problem motivates the area of ad hoc teamwork in which an agent may potentially cooperate with a variety of teammates in order to achieve a shared goal. This article focuses on a limited version of the ad hoc teamwork problem in which an agent knows the environmental dynamics and has had past experiences with other teammates, though these experiences may not be representative of the current teammates. To tackle this problem, this article introduces a new general-purpose algorithm, PLASTIC, that reuses knowledge learned from previous teammates or provided by experts to quickly adapt to new teammates. This algorithm is instantiated in two forms: 1) PLASTIC–Model – which builds models of previous teammates’ behaviors and plans behaviors online using these models and 2) PLASTIC–Policy – which learns policies for cooperating with previous teammates and selects among these policies online. We evaluate PLASTIC on two benchmark tasks: the pursuit domain and robot soccer in the RoboCup 2D simulation domain. Recognizing that a key requirement of ad hoc teamwork is adaptability to previously unseen agents, the tests use more than 40 previously unknown teams on the first task and 7 previously unknown teams on the second. While PLASTIC assumes that there is some degree of similarity between the current and past teammates’ behaviors, no steps are taken in the experimental setup to make sure this assumption holds. The teammates ✩This article contains material from 4 prior conference papers [11–14]. Email addresses: [email protected] (Samuel Barrett), [email protected] (Avi Rosenfeld), [email protected] (Sarit Kraus), [email protected] (Peter Stone) 1This work was performed while Samuel Barrett was a graduate student at the University of Texas at Austin. 2Corresponding author. Preprint submitted to Elsevier October 30, 2016 To appear in http://dx.doi.org/10.1016/j.artint.2016.10.005 Artificial Intelligence (AIJ)",
"title": ""
},
{
"docid": "a27b626618e225b03bec1eea8327be4d",
"text": "As a fundamental preprocessing of various multimedia applications, object proposal aims to detect the candidate windows possibly containing arbitrary objects in images with two typical strategies, window scoring and grouping. In this paper, we first analyze the feasibility of improving object proposal performance by integrating window scoring and grouping strategies. Then, we propose a novel object proposal method for RGB-D images, named elastic edge boxes. The initial bounding boxes of candidate object regions are efficiently generated by edge boxes, and further adjusted by grouping the super-pixels within elastic range to obtain more accurate candidate windows. To validate the proposed method, we construct the largest RGB-D image data set NJU1800 for object proposal with balanced object number distribution. The experimental results show that our method can effectively and efficiently generate the candidate windows of object regions and it outperforms the state-of-the-art methods considering both accuracy and efficiency.",
"title": ""
},
{
"docid": "8654b1d03f46c1bb94b237977c92ff02",
"text": "Many studies suggest using coverage concepts, such as branch coverage, as the starting point of testing, while others as the most prominent test quality indicator. Yet the relationship between coverage and fault-revelation remains unknown, yielding uncertainty and controversy. Most previous studies rely on the Clean Program Assumption, that a test suite will obtain similar coverage for both faulty and fixed ('clean') program versions. This assumption may appear intuitive, especially for bugs that denote small semantic deviations. However, we present evidence that the Clean Program Assumption does not always hold, thereby raising a critical threat to the validity of previous results. We then conducted a study using a robust experimental methodology that avoids this threat to validity, from which our primary finding is that strong mutation testing has the highest fault revelation of four widely-used criteria. Our findings also revealed that fault revelation starts to increase significantly only once relatively high levels of coverage are attained.",
"title": ""
},
{
"docid": "897fb39d295defc4b6e495236a2c74b1",
"text": "Generative modeling of high-dimensional data is a key problem in machine learning. Successful approaches include latent variable models and autoregressive models. The complementary strengths of these approaches, to model global and local image statistics respectively, suggest hybrid models combining the strengths of both models. Our contribution is to train such hybrid models using an auxiliary loss function that controls which information is captured by the latent variables and what is left to the autoregressive decoder. In contrast, prior work on such hybrid models needed to limit the capacity of the autoregressive decoder to prevent degenerate models that ignore the latent variables and only rely on autoregressive modeling. Our approach results in models with meaningful latent variable representations, and which rely on powerful autoregressive decoders to model image details. Our model generates qualitatively convincing samples, and yields stateof-the-art quantitative results.",
"title": ""
},
{
"docid": "1e9e3fce7ae4e980658997c2984f05cb",
"text": "BACKGROUND\nMotivation in learning behaviour and education is well-researched in general education, but less in medical education.\n\n\nAIM\nTo answer two research questions, 'How has the literature studied motivation as either an independent or dependent variable? How is motivation useful in predicting and understanding processes and outcomes in medical education?' in the light of the Self-determination Theory (SDT) of motivation.\n\n\nMETHODS\nA literature search performed using the PubMed, PsycINFO and ERIC databases resulted in 460 articles. The inclusion criteria were empirical research, specific measurement of motivation and qualitative research studies which had well-designed methodology. Only studies related to medical students/school were included.\n\n\nRESULTS\nFindings of 56 articles were included in the review. Motivation as an independent variable appears to affect learning and study behaviour, academic performance, choice of medicine and specialty within medicine and intention to continue medical study. Motivation as a dependent variable appears to be affected by age, gender, ethnicity, socioeconomic status, personality, year of medical curriculum and teacher and peer support, all of which cannot be manipulated by medical educators. Motivation is also affected by factors that can be influenced, among which are, autonomy, competence and relatedness, which have been described as the basic psychological needs important for intrinsic motivation according to SDT.\n\n\nCONCLUSION\nMotivation is an independent variable in medical education influencing important outcomes and is also a dependent variable influenced by autonomy, competence and relatedness. This review finds some evidence in support of the validity of SDT in medical education.",
"title": ""
},
{
"docid": "7b341e406c28255d3cb4df5c4665062d",
"text": "We propose MRU (Multi-Range Reasoning Units), a new fast compositional encoder for machine comprehension (MC). Our proposed MRU encoders are characterized by multi-ranged gating, executing a series of parameterized contractand-expand layers for learning gating vectors that benefit from long and short-term dependencies. The aims of our approach are as follows: (1) learning representations that are concurrently aware of long and short-term context, (2) modeling relationships between intra-document blocks and (3) fast and efficient sequence encoding. We show that our proposed encoder demonstrates promising results both as a standalone encoder and as well as a complementary building block. We conduct extensive experiments on three challenging MC datasets, namely RACE, SearchQA and NarrativeQA, achieving highly competitive performance on all. On the RACE benchmark, our model outperforms DFN (Dynamic Fusion Networks) by 1.5% − 6% without using any recurrent or convolution layers. Similarly, we achieve competitive performance relative to AMANDA [17] on the SearchQA benchmark and BiDAF [23] on the NarrativeQA benchmark without using any LSTM/GRU layers. Finally, incorporating MRU encoders with standard BiLSTM architectures further improves performance, achieving state-of-the-art results.",
"title": ""
},
{
"docid": "d78609519636e288dae4b1fce36cb7a6",
"text": "Intelligent vehicles have increased their capabilities for highly and, even fully, automated driving under controlled environments. Scene information is received using onboard sensors and communication network systems, i.e., infrastructure and other vehicles. Considering the available information, different motion planning and control techniques have been implemented to autonomously driving on complex environments. The main goal is focused on executing strategies to improve safety, comfort, and energy optimization. However, research challenges such as navigation in urban dynamic environments with obstacle avoidance capabilities, i.e., vulnerable road users (VRU) and vehicles, and cooperative maneuvers among automated and semi-automated vehicles still need further efforts for a real environment implementation. This paper presents a review of motion planning techniques implemented in the intelligent vehicles literature. A description of the technique used by research teams, their contributions in motion planning, and a comparison among these techniques is also presented. Relevant works in the overtaking and obstacle avoidance maneuvers are presented, allowing the understanding of the gaps and challenges to be addressed in the next years. Finally, an overview of future research direction and applications is given.",
"title": ""
},
{
"docid": "c404e6ecb21196fec9dfeadfcb5d4e4b",
"text": "The goal of leading indicators for safety is to identify the potential for an accident before it occurs. Past efforts have focused on identifying general leading indicators, such as maintenance backlog, that apply widely in an industry or even across industries. Other recommendations produce more system-specific leading indicators, but start from system hazard analysis and thus are limited by the causes considered by the traditional hazard analysis techniques. Most rely on quantitative metrics, often based on probabilistic risk assessments. This paper describes a new and different approach to identifying system-specific leading indicators and provides guidance in designing a risk management structure to generate, monitor and use the results. The approach is based on the STAMP (SystemTheoretic Accident Model and Processes) model of accident causation and tools that have been designed to build on that model. STAMP extends current accident causality to include more complex causes than simply component failures and chains of failure events or deviations from operational expectations. It incorporates basic principles of systems thinking and is based on systems theory rather than traditional reliability theory.",
"title": ""
},
{
"docid": "d2c0e71db2957621eca42bdc221ffb8f",
"text": "Financial time sequence analysis has been a popular research topic in the field of finance, data science and machine learning. It is a highly challenging due to the extreme complexity within the sequences. Mostly existing models are failed to capture its intrinsic information, factor and tendency. To improve the previous approaches, in this paper, we propose a Hidden Markov Model (HMMs) based approach to analyze the financial time sequence. The fluctuation of financial time sequence was predicted through introducing a dual-state HMMs. Dual-state HMMs models the sequence and produces the features which will be delivered to SVMs for prediction. Note that we cast a financial time sequence prediction problem to a classification problem. To evaluate the proposed approach, we use Shanghai Composite Index as the dataset for empirically experiments. The dataset was collected from 550 consecutive trading days, and is randomly split to the training set and test set. The extensively experimental results show that: when analyzing financial time sequence, the mean-square error calculated with HMMs was obviously smaller error than the compared GARCH approach. Therefore, when using HMM to predict the fluctuation of financial time sequence, it achieves higher accuracy and exhibits several attractive advantageous over GARCH approach.",
"title": ""
},
{
"docid": "ff83e090897ed7b79537392801078ffb",
"text": "Component-based software engineering has had great impact in the desktop and server domain and is spreading to other domains as well, such as embedded systems. Agile software development is another approach which has gained much attention in recent years, mainly for smaller-scale production of less critical systems. Both of them promise to increase system quality, development speed and flexibility, but so far little has been published on the combination of the two approaches. This paper presents a comprehensive analysis of the applicability of the agile approach in the development processes of 1) COTS components and 2) COTS-based systems. The study method is a systematic theoretical examination and comparison of the fundamental concepts and characteristics of these approaches. The contributions are: first, an enumeration of identified contradictions between the approaches, and suggestions how to bridge these incompatibilities to some extent. Second, the paper provides some more general comments, considerations, and application guidelines concerning the introduction of agile principles into the development of COTS components or COTS-based systems. This study thus forms a framework which will guide further empirical studies.",
"title": ""
},
{
"docid": "6562b9b46d17bf983bcef7f486ecbc36",
"text": "Upper-extremity venous thrombosis often presents as unilateral arm swelling. The differential diagnosis includes lesions compressing the veins and causing a functional venous obstruction, venous stenosis, an infection causing edema, obstruction of previously functioning lymphatics, or the absence of sufficient lymphatic channels to ensure effective drainage. The following recommendations are made with the understanding that venous disease, specifically venous thrombosis, is the primary diagnosis to be excluded or confirmed in a patient presenting with unilateral upper-extremity swelling. Contrast venography remains the best reference-standard diagnostic test for suspected upper-extremity acute venous thrombosis and may be needed whenever other noninvasive strategies fail to adequately image the upper-extremity veins. Duplex, color flow, and compression ultrasound have also established a clear role in evaluation of the more peripheral veins that are accessible to sonography. Gadolinium contrast-enhanced MRI is routinely used to evaluate the status of the central veins. Delayed CT venography can often be used to confirm or exclude more central vein venous thrombi, although substantial contrast loads are required. The ACR Appropriateness Criteria(®) are evidence-based guidelines for specific clinical conditions that are reviewed every 2 years by a multidisciplinary expert panel. The guideline development and review include an extensive analysis of current medical literature from peer-reviewed journals and the application of a well-established consensus methodology (modified Delphi) to rate the appropriateness of imaging and treatment procedures by the panel. In those instances in which evidence is lacking or not definitive, expert opinion may be used to recommend imaging or treatment.",
"title": ""
},
{
"docid": "eb6643fba28b6b84b4d51a565fc97be0",
"text": "The spiral antenna is a well known kind of wideband antenna. The challenges to improve its design are numerous, such as creating a compact wideband matched feeding or controlling the radiation pattern. Here we propose a self matched and compact slot spiral antenna providing a unidirectional pattern.",
"title": ""
}
] |
scidocsrr
|
797dad33f2f98c2954816565895666ba
|
BRISK: Binary Robust invariant scalable keypoints
|
[
{
"docid": "e32f77e31a452ae6866652ce69c5faaa",
"text": "The efficient detection of interesting features is a crucial step for various tasks in Computer Vision. Corners are favored cues due to their two dimensional constraint and fast algorithms to detect them. Recently, a novel corner detection approach, FAST, has been presented which outperforms previous algorithms in both computational performance and repeatability. We will show how the accelerated segment test, which underlies FAST, can be significantly improved by making it more generic while increasing its performance. We do so by finding the optimal decision tree in an extended configuration space, and demonstrating how specialized trees can be combined to yield an adaptive and generic accelerated segment test. The resulting method provides high performance for arbitrary environments and so unlike FAST does not have to be adapted to a specific scene structure. We will also discuss how different test patterns affect the corner response of the accelerated segment test.",
"title": ""
}
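The accelerated segment test discussed above labels a pixel a corner when a long contiguous arc of pixels on a radius-3 circle is uniformly brighter or darker than the centre. Below is a naive brute-force Python reference check of that criterion, not the optimized decision-tree formulation the paper develops; the threshold and arc length are illustrative defaults.

```python
# Offsets of the 16 pixels on the radius-3 Bresenham circle used by
# FAST-style segment tests (listed in order around the circle).
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_corner(img, x, y, threshold=20, arc=9):
    """Naive segment test: True if at least `arc` contiguous circle pixels are
    all brighter than centre+threshold or all darker than centre-threshold.
    `img` is indexed as img[row][col]; (x, y) must be >= 3 pixels from the border."""
    centre = img[y][x]
    states = []
    for dx, dy in CIRCLE:
        p = img[y + dy][x + dx]
        states.append(1 if p > centre + threshold else (-1 if p < centre - threshold else 0))
    states += states                      # duplicate so wrap-around runs are counted
    best = run = 0
    last = 0
    for s in states:
        run = run + 1 if (s != 0 and s == last) else (1 if s != 0 else 0)
        last = s
        best = max(best, run)
    return best >= arc

print(is_corner([[10] * 7 for _ in range(7)], 3, 3))  # False on a flat patch
```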
] |
[
{
"docid": "2a4eb6d12a50034b5318d246064cb86e",
"text": "In this paper, we study the 3D volumetric modeling problem by adopting the Wasserstein introspective neural networks method (WINN) that was previously applied to 2D static images. We name our algorithm 3DWINN which enjoys the same properties as WINN in the 2D case: being simultaneously generative and discriminative. Compared to the existing 3D volumetric modeling approaches, 3DWINN demonstrates competitive results on several benchmarks in both the generation and the classification tasks. In addition to the standard inception score, the Fréchet Inception Distance (FID) metric is also adopted to measure the quality of 3D volumetric generations. In addition, we study adversarial attacks for volumetric data and demonstrate the robustness of 3DWINN against adversarial examples while achieving appealing results in both classification and generation within a single model. 3DWINN is a general framework and it can be applied to the emerging tasks for 3D object and scene modeling.",
"title": ""
},
{
"docid": "1a6ece40fa87e787f218902eba9b89f7",
"text": "Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3\\% absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers.",
"title": ""
},
{
"docid": "05ba530d5f07e141d18c3f9b92a6280d",
"text": "In this paper, we introduce autoencoder ensembles for unsupervised outlier detection. One problem with neural networks is that they are sensitive to noise and often require large data sets to work robustly, while increasing data size makes them slow. As a result, there are only a few existing works in the literature on the use of neural networks in outlier detection. This paper shows that neural networks can be a very competitive technique to other existing methods. The basic idea is to randomly vary on the connectivity architecture of the autoencoder to obtain significantly better performance. Furthermore, we combine this technique with an adaptive sampling method to make our approach more efficient and effective. Experimental results comparing the proposed approach with state-of-theart detectors are presented on several benchmark data sets showing the accuracy of our approach.",
"title": ""
},
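The abstract above scores outliers by reconstruction error over an ensemble of autoencoders with randomly varied connectivity. The sketch below only approximates that idea with scikit-learn MLPs whose hidden width is randomized (a simplification of the paper's randomized connectivity); the function and parameter names are made up for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def autoencoder_ensemble_scores(X, n_members=5, seed=0):
    """Outlier scores from an ensemble of small autoencoders: each member is
    an MLP trained to reconstruct its input, with a randomly chosen hidden
    width; the score is the median per-sample reconstruction error."""
    rng = np.random.default_rng(seed)
    errors = []
    for m in range(n_members):
        hidden = int(rng.integers(2, max(3, X.shape[1])))   # random bottleneck size
        ae = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=500, random_state=m)
        ae.fit(X, X)                                         # reconstruct the input
        err = ((ae.predict(X) - X) ** 2).mean(axis=1)        # per-sample squared error
        errors.append(err)
    return np.median(np.vstack(errors), axis=0)              # robust ensemble combination

# Toy data: a dense cluster plus one obvious outlier.
X = np.vstack([np.random.default_rng(1).normal(0, 1, (200, 10)),
               np.full((1, 10), 8.0)])
scores = autoencoder_ensemble_scores(X)
print(scores.argmax())   # index of the most outlying point (likely 200)
```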
{
"docid": "b0eea601ef87dbd1d7f39740ea5134ae",
"text": "Syndromal classification is a well-developed diagnostic system but has failed to deliver on its promise of the identification of functional pathological processes. Functional analysis is tightly connected to treatment but has failed to develop testable. replicable classification systems. Functional diagnostic dimensions are suggested as a way to develop the functional classification approach, and experiential avoidance is described as 1 such dimension. A wide range of research is reviewed showing that many forms of psychopathology can be conceptualized as unhealthy efforts to escape and avoid emotions, thoughts, memories, and other private experiences. It is argued that experiential avoidance, as a functional diagnostic dimension, has the potential to integrate the efforts and findings of researchers from a wide variety of theoretical paradigms, research interests, and clinical domains and to lead to testable new approaches to the analysis and treatment of behavioral disorders. Steven C. Haves, Kelly G. Wilson, Elizabeth V. Gifford, and Victoria M. Follette. Department of Psychology. University of Nevada: Kirk Strosahl, Mental Health Center, Group Health Cooperative, Seattle, Washington. Preparation of this article was supported in part by Grant DA08634 from the National Institute on Drug Abuse. Correspondence concerning this article should be addressed to Steven C. Hayes, Department of Psychology, Mailstop 296, College of Arts and Science. University of Nevada, Reno, Nevada 89557-0062. The process of classification lies at the root of all scientific behavior. It is literally impossible to speak about a truly unique event, alone and cut off from all others, because words themselves are means of categorization (Brunei, Goodnow, & Austin, 1956). Science is concerned with refined and systematic verbal formulations of events and relations among events. Because \"events\" are always classes of events, and \"relations\" are always classes of relations, classification is one of the central tasks of science. The field of psychopathology has seen myriad classification systems (Hersen & Bellack, 1988; Sprock & Blashfield, 1991). The differences among some of these approaches are both long-standing and relatively unchanging, in part because systems are never free from a priori assumptions and guiding principles that provide a framework for organizing information (Adams & Cassidy, 1993). In the present article, we briefly examine the differences between two core classification strategies in psychopathology syndromal and functional. We then articulate one possible functional diagnostic dimension: experiential avoidance. Several common syndromal categories are examined to see how this dimension can organize data found among topographical groupings. Finally, the utility and implications of this functional dimensional category are examined. Comparing Syndromal and Functional Classification Although there are many purposes to diagnostic classification, most researchers seem to agree that the ultimate goal is the development of classes, dimensions, or relational categories that can be empirically wedded to treatment strategies (Adams & Cassidy, 1993: Hayes, Nelson & Jarrett, 1987: Meehl, 1959). Syndromal classification – whether dimensional or categorical – can be traced back to Wundt and Galen and, thus, is as old as scientific psychology itself (Eysenck, 1986). 
Syndromal classification starts with constellations of signs and symptoms to identify the disease entities that are presumed to give rise to these constellations. Syndromal classification thus starts with structure and, it is hoped, ends with utility. The attempt in functional classification, conversely, is to start with utility by identifying functional processes with clear treatment implications. It then works backward and returns to the issue of identifiable signs and symptoms that reflect these processes. These differences are fundamental. Syndromal Classification The economic and political dominance of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (e.g., 4th ed.; DSM -IV; American Psychiatric Association, 1994) has lead to a worldwide adoption of syndromal classification as an analytic strategy in psychopathology. The only widely used alternative, the International Classification of Diseases (ICD) system, was a source document for the original DSM, and continuous efforts have been made to ensure their ongoing compatibility (American Psychiatric Association 1994). The immediate goal of syndromal classification (Foulds. 1971) is to identify collections of signs (what one sees) and symptoms (what the client's complaint is). The hope is that these syndromes will lead to the identification of disorders with a known etiology, course, and response to treatment. When this has been achieved, we are no longer speaking of syndromes but of diseases. Because the construct of disease involves etiology and response to treatment, these classifications are ultimately a kind of functional unit. Thus, the syndromal classification approach is a topographically oriented classification strategy for the identification of functional units of abnormal behavior. When the same topographical outcome can be established by diverse processes, or when very different topographical outcomes can come from the same process, the syndromal model has a difficult time actually producing its intended functional units (cf. Bandura, 1982; Meehl, 1978). Some medical problems (e.g., cancer) have these features, and in these areas medical researchers no longer look to syndromal classification as a quick route to an understanding of the disease processes involved. The link between syndromes (topography of signs and symptoms) and diseases (function) has been notably weak in psychopathology. After over 100 years of effort, almost no psychological diseases have been clearly identified. With the exception of general paresis and a few clearly neurological disorders, psychiatric syndromes have remained syndromes indefinitely. In the absence of progress toward true functional entities, syndromal classification of psychopathology has several down sides. Symptoms are virtually non-falsifiable, because they depend only on certain formal features. Syndromal categories tend to evolve changing their names frequently and splitting into ever finer subcategories but except for political reasons (e.g., homosexuality as a disorder) they rarely simply disappear. As a result, the number of syndromes within the DSM system has increased exponentially (Follette, Houts, & Hayes, 1992). Increasingly refined topographical distinctions can always be made without the restraining and synthesizing effect of the identification of common etiological processes. In physical medicine, syndromes regularly disappear into disease categories. 
A wide variety of symptoms can be caused by a single disease, or a common symptom can be explained by very different diseases entities. For example, \"headaches\" are not a disease, because they could be due to influenza, vision problems, ruptured blood vessels, or a host of other factors. These etiological factors have very different treatment implications. Note that the reliability of symptom detection is not what is at issue. Reliably diagnosing headaches does not translate into reliably diagnosing the underlying functional entity, which after all is the crucial factor for treatment decisions. In the same way, the increasing reliability of DSM diagnoses is of little consolation in and of itself. The DSM system specifically eschews the primary importance of functional processes: \"The approach taken in DSM-III is atheoretical with regard to etiology or patho-physiological process\" (American Psychiatric Association, 1980, p. 7). This spirit of etiological agnosticism is carried forward in the most recent DSM incarnation. It is meant to encourage users from widely varying schools of psychology to use the same classification system. Although integration is a laudable goal, the price paid may have been too high (Follette & Hayes, 1992). For example, the link between syndromal categories and biological markers or change processes has been consistently disappointing. To date, compellingly sensitive and specific physiological markers have not been identified for any psychiatric syndrome (Hoes, 1986). Similarly, the link between syndromes and differential treatment has long been known to be weak (see Hayes et al., 1987). We still do not have compelling evidence that syndromal classification contributes substantially to treatment outcome (Hayes et al., 1987). Even in those few instances and not others, mechanisms of change are often unclear of unexamined (Follette, 1995), in part because syndromal categories give researchers few leads about where even to look. Without attention to etiology, treatment utility, and pathological process, the current syndromal system seems unlikely to evolve rapidly into a functional, theoretically relevant system. Functional Classification In a functional approach to classification, the topographical characteristics of any particular individual's behavior is not the basis for classification; instead, behaviors and sets of behaviors are organized by the functional processes that are thought to have produced and maintained them. This functional method is inherently less direct and naive than a syndromal approach, as it requires the application of pre-existing information about psychological processes to specific response forms. It thus integrates at least rudimentary forms of theory into the classification strategy, in sharp contrast with the atheoretical goals of the DSM system. Functional Diagnostic Dimensions as a Method of Functional Classification Classical functional analysis is the most dominant example of a functional classification system. It consists of six steps (Hayes & Follette, 1992) -Step 1: identify potentially relevant characterist",
"title": ""
},
{
"docid": "3ab85b8f58e60f4e59d6be49648ce290",
"text": "It is basically a solved problem for a server to authenticate itself to a client using standard methods of Public Key cryptography. The Public Key Infrastructure (PKI) supports the SSL protocol which in turn enables this functionality. The single-point-of-failure in PKI, and hence the focus of attacks, is the Certi cation Authority. However this entity is commonly o -line, well defended, and not easily got at. For a client to authenticate itself to the server is much more problematical. The simplest and most common mechanism is Username/Password. Although not at all satisfactory, the only onus on the client is to generate and remember a password and the reality is that we cannot expect a client to be su ciently sophisticated or well organised to protect larger secrets. However Username/Password as a mechanism is breaking down. So-called zero-day attacks on servers commonly recover les containing information related to passwords, and unless the passwords are of su ciently high entropy they will be found. The commonly applied patch is to insist that clients adopt long, complex, hard-to-remember passwords. This is essentially a second line of defence imposed on the client to protect them in the (increasingly likely) event that the authentication server will be successfully hacked. Note that in an ideal world a client should be able to use a low entropy password, as a server can limit the number of attempts the client can make to authenticate itself. The often proposed alternative is the adoption of multifactor authentication. In the simplest case the client must demonstrate possession of both a token and a password. The banks have been to the forefront of adopting such methods, but the token is invariably a physical device of some kind. Cryptography's embarrassing secret is that to date no completely satisfactory means has been discovered to implement two-factor authentication entirely in software. In this paper we propose such a scheme.",
"title": ""
},
{
"docid": "15a37341901e410e2754ae46d7ba11e7",
"text": "Extraction-transformation-loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization and insertion into a data warehouse. Usually, these processes must be completed in a certain time window; thus, it is necessary to optimize their execution time. In this paper, we delve into the logical optimization of ETL processes, modeling it as a state-space search problem. We consider each ETL workflow as a state and fabricate the state space through a set of correct state transitions. Moreover, we provide algorithms towards the minimization of the execution cost of an ETL workflow.",
"title": ""
},
{
"docid": "ea544ffc7eeee772388541d0d01812a7",
"text": "Despite the fact that MRI has evolved to become the standard method for diagnosis and monitoring of patients with brain tumours, conventional MRI sequences have two key limitations: the inability to show the full extent of the tumour and the inability to differentiate neoplastic tissue from nonspecific, treatment-related changes after surgery, radiotherapy, chemotherapy or immunotherapy. In the past decade, PET involving the use of radiolabelled amino acids has developed into an important diagnostic tool to overcome some of the shortcomings of conventional MRI. The Response Assessment in Neuro-Oncology working group — an international effort to develop new standardized response criteria for clinical trials in brain tumours — has recommended the additional use of amino acid PET imaging for brain tumour management. Concurrently, a number of advanced MRI techniques such as magnetic resonance spectroscopic imaging and perfusion weighted imaging are under clinical evaluation to target the same diagnostic problems. This Review summarizes the clinical role of amino acid PET in relation to advanced MRI techniques for differential diagnosis of brain tumours; delineation of tumour extent for treatment planning and biopsy guidance; post-treatment differentiation between tumour progression or recurrence versus treatment-related changes; and monitoring response to therapy. An outlook for future developments in PET and MRI techniques is also presented.",
"title": ""
},
{
"docid": "f63503eb721aa7c1fd6b893c2c955fdf",
"text": "In 2008, financial tsunami started to impair the economic development of many countries, including Taiwan. The prediction of financial crisis turns to be much more important and doubtlessly holds public attention when the world economy goes to depression. This study examined the predictive ability of the four most commonly used financial distress prediction models and thus constructed reliable failure prediction models for public industrial firms in Taiwan. Multiple discriminate analysis (MDA), logit, probit, and artificial neural networks (ANNs) methodology were employed to a dataset of matched sample of failed and non-failed Taiwan public industrial firms during 1998–2005. The final models are validated using within sample test and out-of-the-sample test, respectively. The results indicated that the probit, logit, and ANN models which used in this study achieve higher prediction accuracy and possess the ability of generalization. The probit model possesses the best and stable performance. However, if the data does not satisfy the assumptions of the statistical approach, then the ANN approach would demonstrate its advantage and achieve higher prediction accuracy. In addition, the models which used in this study achieve higher prediction accuracy and possess the ability of generalization than those of [Altman, Financial ratios—discriminant analysis and the prediction of corporate bankruptcy using capital market data, Journal of Finance 23 (4) (1968) 589–609, Ohlson, Financial ratios and the probability prediction of bankruptcy, Journal of Accounting Research 18 (1) (1980) 109–131, and Zmijewski, Methodological issues related to the estimation of financial distress prediction models, Journal of Accounting Research 22 (1984) 59–82]. In summary, the models used in this study can be used to assist investors, creditors, managers, auditors, and regulatory agencies in Taiwan to predict the probability of business failure. & 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5d8fc02f96206da7ccb112866951d4c7",
"text": "Immersive technologies such as augmented reality devices are opening up a new design space for the visual analysis of data. This paper studies the potential of an augmented reality environment for the purpose of collaborative analysis of multidimensional, abstract data. We present ART, a collaborative analysis tool to visualize multidimensional data in augmented reality using an interactive, 3D parallel coordinates visualization. The visualization is anchored to a touch-sensitive tabletop, benefiting from well-established interaction techniques. The results of group-based, expert walkthroughs show that ART can facilitate immersion in the data, a fluid analysis process, and collaboration. Based on the results, we provide a set of guidelines and discuss future research areas to foster the development of immersive technologies as tools for the collaborative analysis of multidimensional data.",
"title": ""
},
{
"docid": "bf9d706685f76877a56d323423b32a5c",
"text": "BACKGROUND\nFine particulate air pollution has been linked to cardiovascular disease, but previous studies have assessed only mortality and differences in exposure between cities. We examined the association of long-term exposure to particulate matter of less than 2.5 microm in aerodynamic diameter (PM2.5) with cardiovascular events.\n\n\nMETHODS\nWe studied 65,893 postmenopausal women without previous cardiovascular disease in 36 U.S. metropolitan areas from 1994 to 1998, with a median follow-up of 6 years. We assessed the women's exposure to air pollutants using the monitor located nearest to each woman's residence. Hazard ratios were estimated for the first cardiovascular event, adjusting for age, race or ethnic group, smoking status, educational level, household income, body-mass index, and presence or absence of diabetes, hypertension, or hypercholesterolemia.\n\n\nRESULTS\nA total of 1816 women had one or more fatal or nonfatal cardiovascular events, as confirmed by a review of medical records, including death from coronary heart disease or cerebrovascular disease, coronary revascularization, myocardial infarction, and stroke. In 2000, levels of PM2.5 exposure varied from 3.4 to 28.3 microg per cubic meter (mean, 13.5). Each increase of 10 microg per cubic meter was associated with a 24% increase in the risk of a cardiovascular event (hazard ratio, 1.24; 95% confidence interval [CI], 1.09 to 1.41) and a 76% increase in the risk of death from cardiovascular disease (hazard ratio, 1.76; 95% CI, 1.25 to 2.47). For cardiovascular events, the between-city effect appeared to be smaller than the within-city effect. The risk of cerebrovascular events was also associated with increased levels of PM2.5 (hazard ratio, 1.35; 95% CI, 1.08 to 1.68).\n\n\nCONCLUSIONS\nLong-term exposure to fine particulate air pollution is associated with the incidence of cardiovascular disease and death among postmenopausal women. Exposure differences within cities are associated with the risk of cardiovascular disease.",
"title": ""
},
{
"docid": "7dcc7cdff8a9196c716add8a1faf0203",
"text": "Power modulators for compact, repetitive systems are continually faced with new requirements as the corresponding system objectives increase. Changes in pulse rate frequency or number of pulses significantly impact the design of the power conditioning system. In order to meet future power supply requirements, we have developed several high voltage (HV) capacitor charging power supplies (CCPS). This effort focuses on a volume of 6\" x 6\" x 14\" and a weight of 25 lbs. The primary focus was to increase the effective capacitor charge rate, or power output, for the given size and weight. Although increased power output was the principal objective, efficiency and repeatability were also considered. A number of DC-DC converter topologies were compared to determine the optimal design. In order to push the limits of output power, numerous resonant converter parameters were examined. Comparisons of numerous topologies, HV transformers and rectifiers, and switching frequency ranges are presented. The impacts of the control system and integration requirements are also considered.",
"title": ""
},
{
"docid": "a7959808cb41963e8d204c3078106842",
"text": "Human alteration of the global environment has triggered the sixth major extinction event in the history of life and caused widespread changes in the global distribution of organisms. These changes in biodiversity alter ecosystem processes and change the resilience of ecosystems to environmental change. This has profound consequences for services that humans derive from ecosystems. The large ecological and societal consequences of changing biodiversity should be minimized to preserve options for future solutions to global environmental problems.",
"title": ""
},
{
"docid": "5a11ab9ece5295d4d1d16401625ab3d4",
"text": "The hardware implementation of deep neural networks (DNNs) has recently received tremendous attention since many applications require high-speed operations. However, numerous processing elements and complex interconnections are usually required, leading to a large area occupation and a high power consumption. Stochastic computing has shown promising results for area-efficient hardware implementations, even though existing stochastic algorithms require long streams that exhibit long latency. In this paper, we propose an integer form of stochastic computation and introduce some elementary circuits. We then propose an efficient implementation of a DNN based on integral stochastic computing. The proposed architecture uses integer stochastic streams and a modified Finite State Machine-based tanh function to improve the performance and reduce the latency compared to existing stochastic architectures for DNN. The simulation results show the negligible performance loss of the proposed integer stochastic DNN for different network sizes compared to their floating point versions.",
"title": ""
},
{
"docid": "9573bb5596dcec8668e9ba1b38d0b310",
"text": "Gesture is becoming an increasingly popular means of interacting with computers. However, it is still relatively costly to deploy robust gesture recognition sensors in existing mobile platforms. We present SoundWave, a technique that leverages the speaker and microphone already embedded in most commodity devices to sense in-air gestures around the device. To do this, we generate an inaudible tone, which gets frequency-shifted when it reflects off moving objects like the hand. We measure this shift with the microphone to infer various gestures. In this note, we describe the phenomena and detection algorithm, demonstrate a variety of gestures, and present an informal evaluation on the robustness of this approach across different devices and people.",
"title": ""
},
{
"docid": "26c003f70bbaade54b84dcb48d2a08c9",
"text": "Tricaine methanesulfonate (TMS) is an anesthetic that is approved for provisional use in some jurisdictions such as the United States, Canada, and the United Kingdom (UK). Many hatcheries and research studies use TMS to immobilize fish for marking or transport and to suppress sensory systems during invasive procedures. Improper TMS use can decrease fish viability, distort physiological data, or result in mortalities. Because animals may be anesthetized by junior staff or students who may have little experience in fish anesthesia, training in the proper use of TMS may decrease variability in recovery, experimental results and increase fish survival. This document acts as a primer on the use of TMS for anesthetizing juvenile salmonids, with an emphasis on its use in surgical applications. Within, we briefly describe many aspects of TMS including the legal uses for TMS, and what is currently known about the proper storage and preparation of the anesthetic. We outline methods and precautions for administration and changes in fish behavior during progressively deeper anesthesia and discuss the physiological effects of TMS and its potential for compromising fish health. Despite the challenges of working with TMS, it is currently one of the few legal options available in the USA and in other countries until other anesthetics are approved and is an important tool for the intracoelomic implantation of electronic tags in fish.",
"title": ""
},
{
"docid": "c1c9730b191f2ac9186ac704fd5b929f",
"text": "This paper reports on the results of a survey of user interface programming. The survey was widely distributed, and we received 74 responses. The results show that in today's applications, an average of 48% of the code is devoted to the user interface portion. The average time spent on the user interface portion is 45% during the design phase, 50% during the implementation phase, and 37% during the maintenance phase. 34% of the systems were implemented using a toolkit, 27% used a UIMS, 14% used an interface builder, and 26% used no tools. This appears to be because the toolkit systems had more sophisticated user interfaces. The projects using UIMSs or interface builders spent the least percent of time and code on the user interface (around 41%) suggesting that these tools are effective. In general, people were happy with the tools they used, especially the graphical interface builders. The most common problems people reported when developing a user interface included getting users' requirements, writing help text, achieving consistency, learning how to use the tools, getting acceptable performance, and communicating among various parts of the program.",
"title": ""
},
{
"docid": "9072c5ad2fbba55bdd50b5969862f7c3",
"text": "Parametricism has come to scene as an important style in both architectural design and construction where conventional Computer-Aided Design (CAD) tool has become substandard. Building Information Modeling (BIM) is a recent object-based parametric modeling tool for exploring the relationship between the geometric and non-geometric components of the model. The aim of this research is to explore the capabilities of BIM in achieving variety and flexibility in design extending from architectural to urban scale. This study proposes a method by using User Interface (UI) and Application Programming Interface (API) tools of BIM to generate a complex roof structure as a parametric family. This project demonstrates a dynamic variety in architectural scale. We hypothesized that if a function calculating the roof length is defined using a variety of inputs, it can later be applied to urban scale by utilizing a database of the inputs.",
"title": ""
},
{
"docid": "3b06bc2d72e0ae7fa75873ed70e23fc3",
"text": "Transaction traces analysis is a key utility for marketing, trend monitoring, and fraud detection purposes. However, they can also be used for designing and verification of contextual risk management systems for card-present transactions. In this paper, we presented a novel approach to collect detailed transaction traces directly from payment terminal. Thanks to that, it is possible to analyze each transaction step precisely, including its frequency and timing. We also demonstrated our approach to analyze such data based on real-life experiment. Finally, we concluded this paper with important findings for designers of such a system.",
"title": ""
},
{
"docid": "e757926fbaec4097530b9a00c1278b1c",
"text": "Many fish populations have both resident and migratory individuals. Migrants usually grow larger and have higher reproductive potential but lower survival than resident conspecifics. The ‘decision’ about migration versus residence probably depends on the individual growth rate, or a physiological process like metabolic rate which is correlated with growth rate. Fish usually mature as their somatic growth levels off, where energetic costs of maintenance approach energetic intake. After maturation, growth also stagnates because of resource allocation to reproduction. Instead of maturation, however, fish may move to an alternative feeding habitat and their fitness may thereby be increased. When doing so, maturity is usually delayed, either to the new asymptotic length, or sooner, if that gives higher expected fitness. Females often dominate among migrants and males among residents. The reason is probably that females maximize their fitness by growing larger, because their reproductive success generally increases exponentially with body size. Males, on the other hand, may maximize fitness by alternative life histories, e.g. fighting versus sneaking, as in many salmonid species where small residents are the sneakers and large migrants the fighters. Partial migration appears to be partly developmental, depending on environmental conditions, and partly genetic, inherited as a quantitative trait influenced by a number of genes.",
"title": ""
}
] |
scidocsrr
|
68e7ad7ce70918a0d31e9949a4f6095f
|
Nested Mini-Batch K-Means
|
[
{
"docid": "ed9e22167d3e9e695f67e208b891b698",
"text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.",
"title": ""
},
{
"docid": "cda19d99a87ca769bb915167f8a842e8",
"text": "Sparse coding---that is, modelling data vectors as sparse linear combinations of basis elements---is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on learning the basis set, also called dictionary, to adapt it to specific data, an approach that has recently proven to be very effective for signal reconstruction and classification in the audio and image processing domains. This paper proposes a new online optimization algorithm for dictionary learning, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples. A proof of convergence is presented, along with experiments with natural images demonstrating that it leads to faster performance and better dictionaries than classical batch algorithms for both small and large datasets.",
"title": ""
}
] |
[
{
"docid": "348702d85126ed64ca24bdc62c1146d9",
"text": "Autonomous Vehicles are currently being tested in a variety of scenarios. As we move towards Autonomous Vehicles, how should intersections look? To answer that question, we break down an intersection management into the different conundrums and scenarios involved in the trajectory planning and current approaches to solve them. Then, a brief analysis of current works in autonomous intersection is conducted. With a critical eye, we try to delve into the discrepancies of existing solutions while presenting some critical and important factors that have been addressed. Furthermore, open issues that have to be addressed are also emphasized. We also try to answer the question of how to benchmark intersection management algorithms by providing some factors that impact autonomous navigation at intersection.",
"title": ""
},
{
"docid": "c03ae003e3fd6503822480267108e2a6",
"text": "A relatively simple model of the phonological loop (A. D. Baddeley, 1986), a component of working memory, has proved capable of accommodating a great deal of experimental evidence from normal adult participants, children, and neuropsychological patients. Until recently, however, the role of this subsystem in everyday cognitive activities was unclear. In this article the authors review studies of word learning by normal adults and children, neuropsychological patients, and special developmental populations, which provide evidence that the phonological loop plays a crucial role in learning the novel phonological forms of new words. The authors propose that the primary purpose for which the phonological loop evolved is to store unfamiliar sound patterns while more permanent memory records are being constructed. Its use in retaining sequences of familiar words is, it is argued, secondary.",
"title": ""
},
{
"docid": "c55afb93606ddb88f0a9274f06eca68b",
"text": "Social media use continues to grow and is especially prevalent among young adults. It is surprising then that, in spite of this enhanced interconnectivity, young adults may be lonelier than other age groups, and that the current generation may be the loneliest ever. We propose that only image-based platforms (e.g., Instagram, Snapchat) have the potential to ameliorate loneliness due to the enhanced intimacy they offer. In contrast, text-based platforms (e.g., Twitter, Yik Yak) offer little intimacy and should have no effect on loneliness. This study (N 1⁄4 253) uses a mixed-design survey to test this possibility. Quantitative results suggest that loneliness may decrease, while happiness and satisfaction with life may increase, as a function of image-based social media use. In contrast, text-based media use appears ineffectual. Qualitative results suggest that the observed effects may be due to the enhanced intimacy offered by imagebased (versus text-based) social media use. © 2016 Published by Elsevier Ltd. “The more advanced the technology, on the whole, the more possible it is for a considerable number of human beings to imagine being somebody else.” -sociologist David Riesman.",
"title": ""
},
{
"docid": "509d77cef3f9ded37f75b0b1a1314e81",
"text": "Object class detection has been a synonym for 2D bounding box localization for the longest time, fueled by the success of powerful statistical learning techniques, combined with robust image representations. Only recently, there has been a growing interest in revisiting the promise of computer vision from the early days: to precisely delineate the contents of a visual scene, object by object, in 3D. In this paper, we draw from recent advances in object detection and 2D-3D object lifting in order to design an object class detector that is particularly tailored towards 3D object class detection. Our 3D object class detection method consists of several stages gradually enriching the object detection output with object viewpoint, keypoints and 3D shape estimates. Following careful design, in each stage it constantly improves the performance and achieves state-of-the-art performance in simultaneous 2D bounding box and viewpoint estimation on the challenging Pascal3D+ [50] dataset.",
"title": ""
},
{
"docid": "42b287804a9ce6497c3e491b3baa9a6f",
"text": "Smothering is defined as an obstruction of the air passages above the level of the epiglottis, including the nose, mouth, and pharynx. This is in contrast to choking, which is considered to be due to an obstruction of the air passages below the epiglottis. The manner of death in smothering can be homicidal, suicidal, or an accident. Accidental smothering is considered to be a rare event among middle-aged adults, yet many cases still occur. Presented here is the case of a 39-year-old woman with a history of bipolar disease who was found dead on her living room floor by her neighbors. Her hands were covered in scratches and her pet cat was found disemboweled in the kitchen with its tail hacked off. On autopsy her stomach was found to be full of cat intestines, adipose tissue, and strips of fur-covered skin. An intact left kidney and adipose tissue were found lodged in her throat just above her epiglottis. After a complete investigation, the cause of death was determined to be asphyxia by smothering due to animal tissue.",
"title": ""
},
{
"docid": "346e160403ff9eb55c665f6cb8cca481",
"text": "Tasks in visual analytics differ from typical information retrieval tasks in fundamental ways. A critical part of a visual analytics is to ask the right questions when dealing with a diverse collection of information. In this article, we introduce the design and application of an integrated exploratory visualization system called Storylines. Storylines provides a framework to enable analysts visually and systematically explore and study a body of unstructured text without prior knowledge of its thematic structure. The system innovatively integrates latent semantic indexing, natural language processing, and social network analysis. The contributions of the work include providing an intuitive and directly accessible representation of a latent semantic space derived from the text corpus, an integrated process for identifying salient lines of stories, and coordinated visualizations across a spectrum of perspectives in terms of people, locations, and events involved in each story line. The system is tested with the 2006 VAST contest data, in particular, the portion of news articles.",
"title": ""
},
{
"docid": "deccc92276cca4d064b0161fd8ee7dd9",
"text": "Vast amount of information is available on web. Data analysis applications such as extracting mutual funds information from a website, daily extracting opening and closing price of stock from a web page involves web data extraction. Huge efforts are made by lots of researchers to automate the process of web data scraping. Lots of techniques depends on the structure of web page i.e. html structure or DOM tree structure to scrap data from web page. In this paper we are presenting survey of HTML aware web scrapping techniques. Keywords— DOM Tree, HTML structure, semi structured web pages, web scrapping and Web data extraction.",
"title": ""
},
{
"docid": "ad00866e5bae76020e02c6cc76360ec8",
"text": "The CASAS architecture facilitates the development and implementation of future smart home technologies by offering an easy-to-install lightweight design that provides smart home capabilities out of the box with no customization or training.",
"title": ""
},
{
"docid": "e76afdc4a867789e6bcc92876a6b52af",
"text": "An Optimal fuzzy logic guidance (OFLG) law for a surface to air homing missile is introduced. The introduced approach is based on the well-known proportional navigation guidance (PNG) law. Particle Swarm Optimization (PSO) is used to optimize the of the membership functions' (MFs) parameters of the proposed design. The distribution of the MFs is obtained by minimizing a nonlinear constrained multi-objective optimization problem where; control effort and miss distance are treated as competing objectives. The performance of the introduced guidance law is compared with classical fuzzy logic guidance (FLG) law as well as PNG one. The simulation results show that OFLG performs better than other guidance laws. Moreover, the introduced design is shown to perform well with the existence of noisy measurements.",
"title": ""
},
{
"docid": "ced4a8b19405839cc948d877e3a42c95",
"text": "18-fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET)/computed tomography (CT) is currently the most valuable imaging technique in Hodgkin lymphoma. Since its first use in lymphomas in the 1990s, it has become the gold standard in the staging and end-of-treatment remission assessment in patients with Hodgkin lymphoma. The possibility of using early (interim) PET during first-line therapy to evaluate chemosensitivity and thus personalize treatment at this stage holds great promise, and much attention is now being directed toward this goal. With high probability, it is believed that in the near future, the result of interim PET-CT would serve as a compass to optimize treatment. Also the role of PET in pre-transplant assessment is currently evolving. Much controversy surrounds the possibility of detecting relapse after completed treatment with the use of PET in surveillance in the absence of symptoms suggestive of recurrence and the results of published studies are rather discouraging because of low positive predictive value. This review presents current knowledge about the role of 18-FDG-PET/CT imaging at each point of management of patients with Hodgkin lymphoma.",
"title": ""
},
{
"docid": "8759277ebf191306b3247877e2267173",
"text": "As organizations scale up, their collective knowledge increases, and the potential for serendipitous collaboration between members grows dramatically. However, finding people with the right expertise or interests becomes much more difficult. Semi-structured social media, such as blogs, forums, and bookmarking, present a viable platform for collaboration-if enough people participate, and if shared content is easily findable. Within the trusted confines of an organization, users can trade anonymity for a rich identity that carries information about their role, location, and position in its hierarchy.\n This paper describes WaterCooler, a tool that aggregates shared internal social media and cross-references it with an organization's directory. We deployed WaterCooler in a large global enterprise and present the results of a preliminary user study. Despite the lack of complete social networking affordances, we find that WaterCooler changed users' perceptions of their workplace, made them feel more connected to each other and the company, and redistributed users' attention outside their own business groups.",
"title": ""
},
{
"docid": "8af1865e0adfedb11d9ade95bb39f797",
"text": "In developing automated systems to recognize the emotional content of music, we are faced with a problem spanning two disparate domains: the space of human emotions and the acoustic signal of music. To address this problem, we must develop models for both data collected from humans describing their perceptions of musical mood and quantitative features derived from the audio signal. In previous work, we have presented a collaborative game, MoodSwings, which records dynamic (per-second) mood ratings from multiple players within the two-dimensional Arousal-Valence representation of emotion. Using this data, we present a system linking models of acoustic features and human data to provide estimates of the emotional content of music according to the arousal-valence space. Furthermore, in keeping with the dynamic nature of musical mood we demonstrate the potential of this approach to track the emotional changes in a song over time. We investigate the utility of a range of acoustic features based on psychoacoustic and music-theoretic representations of the audio for this application. Finally, a simplified version of our system is re-incorporated into MoodSwings as a simulated partner for single-players, providing a potential platform for furthering perceptual studies and modeling of musical mood.",
"title": ""
},
{
"docid": "8a128a099087c3dee5bbca7b2a8d8dc4",
"text": "A large class of computational problems involve the determination of properties of graphs, digraphs, integers, arrays of integers, finite families of finite sets, boolean formulas and elements of other countable domains. Through simple encodings from such domains into the set of words over a finite alphabet these problems can be converted into language recognition problems, and we can inquire into their computational complexity. It is reasonable to consider such a problem satisfactorily solved when an algorithm for its solution is found which terminates within a number of steps bounded by a polynomial in the length of the input. We show that a large number of classic unsolved problems of covering, matching, packing, routing, assignment and sequencing are equivalent, in the sense that either each of them possesses a polynomial-bounded algorithm or none of them does.",
"title": ""
},
{
"docid": "7a3441773c79b9fde64ebcf8103616a1",
"text": "SIMD parallelism has become an increasingly important mechanism for delivering performance in modern CPUs, due its power efficiency and relatively low cost in die area compared to other forms of parallelism. Unfortunately, languages and compilers for CPUs have not kept up with the hardware's capabilities. Existing CPU parallel programming models focus primarily on multi-core parallelism, neglecting the substantial computational capabilities that are available in CPU SIMD vector units. GPU-oriented languages like OpenCL support SIMD but lack capabilities needed to achieve maximum efficiency on CPUs and suffer from GPU-driven constraints that impair ease of use on CPUs. We have developed a compiler, the Intel R® SPMD Program Compiler (ispc), that delivers very high performance on CPUs thanks to effective use of both multiple processor cores and SIMD vector units. ispc draws from GPU programming languages, which have shown that for many applications the easiest way to program SIMD units is to use a single-program, multiple-data (SPMD) model, with each instance of the program mapped to one SIMD lane. We discuss language features that make ispc easy to adopt and use productively with existing software systems and show that ispc delivers up to 35x speedups on a 4-core system and up to 240× speedups on a 40-core system for complex workloads (compared to serial C++ code).",
"title": ""
},
{
"docid": "f249a6089a789e52eeadc8ae16213bc1",
"text": "We have collected a new face data set that will facilitate research in the problem of frontal to profile face verification `in the wild'. The aim of this data set is to isolate the factor of pose variation in terms of extreme poses like profile, where many features are occluded, along with other `in the wild' variations. We call this data set the Celebrities in Frontal-Profile (CFP) data set. We find that human performance on Frontal-Profile verification in this data set is only slightly worse (94.57% accuracy) than that on Frontal-Frontal verification (96.24% accuracy). However we evaluated many state-of-the-art algorithms, including Fisher Vector, Sub-SML and a Deep learning algorithm. We observe that all of them degrade more than 10% from Frontal-Frontal to Frontal-Profile verification. The Deep learning implementation, which performs comparable to humans on Frontal-Frontal, performs significantly worse (84.91% accuracy) on Frontal-Profile. This suggests that there is a gap between human performance and automatic face recognition methods for large pose variation in unconstrained images.",
"title": ""
},
{
"docid": "7d4d0e4d99b5dfe675f5f4eff5e5679f",
"text": "Remote work and intensive use of Information Technologies (IT) are increasingly common in organizations. At the same time, professional stress seems to develop. However, IS research has paid little attention to the relationships between these two phenomena. The purpose of this research in progress is to present a framework that introduces the influence of (1) new spatial and temporal constraints and of (2) intensive use of IT on employee emotions at work. Specifically, this paper relies on virtuality (e.g. Chudoba et al. 2005) and media richness (Daft and Lengel 1984) theories to determine the emotional consequences of geographically distributed work.",
"title": ""
},
{
"docid": "ab2096798261a8976846c5f72eeb18ee",
"text": "ion Description and Purpose Variable names Provide human readable names to data addresses Function names Provide human readable names to function addresses Control structures Eliminate ‘‘spaghetti’’ code (The ‘‘goto’’ statement is no longer necessary.) Argument passing Default argument values, keyword specification of arguments, variable length argument lists, etc. Data structures Allow conceptual organization of data Data typing Binds the type of the data to the type of the variable Static Insures program correctness, sacrificing generality. Dynamic Greater generality, sacrificing guaranteed correctness. Inheritance Allows creation of families of related types and easy re-use of common functionality Message dispatch Providing one name to multiple implementations of the same concept Single dispatch Dispatching to a function based on the run-time type of one argument Multiple dispatch Dispatching to a function based on the run-time type of multiple arguments. Predicate dispatch Dispatching to a function based on run-time state of arguments Garbage collection Automated memory management Closures Allow creation, combination, and use of functions as first-class values Lexical binding Provides access to values in the defining context Dynamic binding Provides access to values in the calling context (.valueEnvir in SC) Co-routines Synchronous cooperating processes Threads Asynchronous processes Lazy evaluation Allows the order of operations not to be specified. Infinitely long processes and infinitely large data structures can be specified and used as needed. Applying Language Abstractions to Computer Music The SuperCollider language provides many of the abstractions listed above. SuperCollider is a dynamically typed, single-inheritance, single-argument dispatch, garbage-collected, object-oriented language similar to Smalltalk (www.smalltalk.org). In SuperCollider, everything is an object, including basic types like letters and numbers. Objects in SuperCollider are organized into classes. The UGen class provides the abstraction of a unit generator, and the Synth class represents a group of UGens operating as a group to generate output. An instrument is constructed functionally. That is, when one writes a sound-processing function, one is actually writing a function that creates and connects unit generators. This is different from a procedural or static object specification of a network of unit generators. Instrument functions in SuperCollider can generate the network of unit generators using the full algorithmic capability of the language. For example, the following code can easily generate multiple versions of a patch by changing the values of the variables that specify the dimensions (number of exciters, number of comb delays, number of allpass delays). In a procedural language like Csound or a ‘‘wire-up’’ environment like Max, a different patch would have to be created for different values for the dimensions of the patch.",
"title": ""
},
{
"docid": "cd81ad1c571f9e9a80e2d09582b00f9a",
"text": "OBJECTIVE\nThe biologic basis for gender identity is unknown. Research has shown that the ratio of the length of the second and fourth digits (2D:4D) in mammals is influenced by biologic sex in utero, but data on 2D:4D ratios in transgender individuals are scarce and contradictory. We investigated a possible association between 2D:4D ratio and gender identity in our transgender clinic population in Albany, New York.\n\n\nMETHODS\nWe prospectively recruited 118 transgender subjects undergoing hormonal therapy (50 female to male [FTM] and 68 male to female [MTF]) for finger length measurement. The control group consisted of 37 cisgender volunteers (18 females, 19 males). The length of the second and fourth digits were measured using digital calipers. The 2D:4D ratios were calculated and analyzed with unpaired t tests.\n\n\nRESULTS\nFTM subjects had a smaller dominant hand 2D:4D ratio (0.983 ± 0.027) compared to cisgender female controls (0.998 ± 0.021, P = .029), but a ratio similar to control males (0.972 ± 0.036, P =.19). There was no difference in the 2D:4D ratio of MTF subjects (0.978 ± 0.029) compared to cisgender male controls (0.972 ± 0.036, P = .434).\n\n\nCONCLUSION\nOur findings are consistent with a biologic basis for transgender identity and the possibilities that FTM gender identity is affected by prenatal androgen activity but that MTF transgender identity has a different basis.\n\n\nABBREVIATIONS\n2D:4D = 2nd digit to 4th digit; FTM = female to male; MTF = male to female.",
"title": ""
},
{
"docid": "effdc359389fad7eb320120a6f3548d3",
"text": "Wireless communication system is a heavy dense composition of signal processing techniques with semiconductor technologies. With the ever increasing system capacity and data rate, VLSI design and implementation method for wireless communications becomes more challenging, which urges researchers in signal processing to provide new architectures and efficient algorithms to meet low power and high performance requirements. This paper presents a survey of recent research, a development in VLSI architecture and signal processing algorithms with emphasis on wireless communication systems. It is shown that while contemporary signal processing can be directly applied to the communication hardware design including ASIC, SoC, and FPGA, much work remains to realize its full potential. It is concluded that an integrated combination of VLSI and signal processing technologies will provide more complete solutions.",
"title": ""
},
{
"docid": "98ae5e9dda1be6e3c4eff68fc5ebbb4d",
"text": "Recycling today constitutes the most environmentally friendly method of managing wood waste. A large proportion of the wood waste generated consists of used furniture and other constructed wooden items, which are composed mainly of particleboard, a material which can potentially be reused. In the current research, four different hydrothermal treatments were applied in order to recover wood particles from laboratory particleboards and use them in the production of new (recycled) ones. Quality was evaluated by determining the main properties of the original (control) and the recycled boards. Furthermore, the impact of a second recycling process on the properties of recycled particleboards was studied. With the exception of the modulus of elasticity in static bending, all of the mechanical properties of the recycled boards tested decreased in comparison with the control boards. Furthermore, the recycling process had an adverse effect on their hygroscopic properties and a beneficial effect on the formaldehyde content of the recycled boards. The results indicated that when the 1st and 2nd particleboard recycling processes were compared, it was the 2nd recycling process that caused the strongest deterioration in the quality of the recycled boards. Further research is needed in order to explain the causes of the recycled board quality falloff and also to determine the factors in the recycling process that influence the quality degradation of the recycled boards.",
"title": ""
}
] |
scidocsrr
|
0d503425a6cfad63bab8ef9673adc50f
|
Differential privacy and robust statistics
|
[
{
"docid": "7c449b9714d937dc6a3367a851130c4a",
"text": "We study the role that privacy-preserving algorithms, which prevent the leakage of specific information about participants, can play in the design of mechanisms for strategic agents, which must encourage players to honestly report information. Specifically, we show that the recent notion of differential privacv, in addition to its own intrinsic virtue, can ensure that participants have limited effect on the outcome of the mechanism, and as a consequence have limited incentive to lie. More precisely, mechanisms with differential privacy are approximate dominant strategy under arbitrary player utility functions, are automatically resilient to coalitions, and easily allow repeatability. We study several special cases of the unlimited supply auction problem, providing new results for digital goods auctions, attribute auctions, and auctions with arbitrary structural constraints on the prices. As an important prelude to developing a privacy-preserving auction mechanism, we introduce and study a generalization of previous privacy work that accommodates the high sensitivity of the auction setting, where a single participant may dramatically alter the optimal fixed price, and a slight change in the offered price may take the revenue from optimal to zero.",
"title": ""
}
] |
[
{
"docid": "52b55dab6e8a364bf0bcf05787e5b1ef",
"text": "In adaptive learning systems for distance learning attention is focused on adjusting the learning material to the needs of the individual. Adaptive tests adjust to the current level of knowledge of the examinee and is specific for their needs, thus it is much better at evaluating the knowledge of each individual. The basic goal of adaptive computer tests is to ensure the examinee questions that are challenging enough for them but not too difficult, which would lead to frustration and confusion. The aim of this paper is to present a computer adaptive test (CAT) realized in MATLAB.",
"title": ""
},
{
"docid": "933bc7cc6e1d56969f9d3fc0157f7ac9",
"text": "This paper presents algorithms and techniques for single-sensor tracking and multi-sensor fusion of infrared and radar data. The results show that fusing radar data with infrared data considerably increases detection range, reliability and accuracy of the object tracking. This is mandatory for further development of driver assistance systems. Using multiple model filtering for sensor fusion applications helps to capture the dynamics of maneuvering objects while still achieving smooth object tracking for not maneuvering objects. This is important when safety and comfort systems have to make use of the same sensor information. Comfort systems generally require smoothly filtered data whereas for safety systems it is crucial to capture maneuvers of other road users as fast as possible. Multiple model filtering and probabilistic data association techniques are presented and all presented algorithms are tested in real-time on standard PC systems.",
"title": ""
},
{
"docid": "c2da932aec6f3d8c6fddc9aaa994c9cd",
"text": "As more companies embrace the concepts of sustainable development, there is a need to bring the ideas inherent in eco-efficiency and the \" triple-bottom line \" thinking down to a practical implementation level. Putting this concept into operation requires an understanding of the key indicators of sustainability and how they can be measured to determine if, in fact, progress is being made. Sustainability metrics are intended as simple yardsticks that are applicable across industry. The primary objective of this approach is to improve internal management decision-making with respect to the sustainability of processes, products and services. This approach can be used to make better decisions at any stage of the stage-gate process: from identification of an innovation to design to manufacturing and ultimately to exiting a business. More specifically, sustainability metrics can assist decision makers in setting goals, benchmarking, and comparing alternatives such as different suppliers, raw materials, and improvement options from the sustainability perspective. This paper provides a review on the early efforts and recent progress in the development of sustainability metrics. The experience of BRIDGES to Sustainability™, a not-for-profit organization, in testing, adapting, and refining the sustainability metrics are summarized. Basic and complementary metrics under six impact categories: material, energy, water, solid wastes, toxic release, and pollutant effects, are discussed. The development of BRIDGESworks™ Metrics, a metrics management software tool, is also presented. The software was designed to be both easy to use and flexible. It incorporates a base set of metrics and their heuristics for calculation, as well as a robust set of impact assessment data for use in identifying pollutant effects. While providing a metrics management starting point, the user has the option of creating other metrics defined by the user. The sustainability metrics work at BRIDGES to Sustainability™ was funded partially by the U.S. Department of Energy through a subcontract with the American Institute of Chemical Engineers and through corporate pilots.",
"title": ""
},
{
"docid": "c638fe67f5d4b6e04a37e216edb849fa",
"text": "An exceedingly large number of scientific and engineering fields are confronted with the need for computer simulations to study complex, real world phenomena or solve challenging design problems. However, due to the computational cost of these high fidelity simulations, the use of neural networks, kernel methods, and other surrogate modeling techniques have become indispensable. Surrogate models are compact and cheap to evaluate, and have proven very useful for tasks such as optimization, design space exploration, prototyping, and sensitivity analysis. Consequently, in many fields there is great interest in tools and techniques that facilitate the construction of such regression models, while minimizing the computational cost and maximizing model accuracy. This paper presents a mature, flexible, and adaptive machine learning toolkit for regression modeling and active learning to tackle these issues. The toolkit brings together algorithms for data fitting, model selection, sample selection (active learning), hyperparameter optimization, and distributed computing in order to empower a domain expert to efficiently generate an accurate model for the problem or data at hand.",
"title": ""
},
{
"docid": "8185da1a497e25f0c50e789847b6bd52",
"text": "We address numerical versus experimental design and testing of miniature implantable antennas for biomedical telemetry in the medical implant communications service band (402-405 MHz). A model of a novel miniature antenna is initially proposed for skin implantation, which includes varying parameters to deal with fabrication-specific details. An iterative design-and-testing methodology is further suggested to determine the parameter values that minimize deviations between numerical and experimental results. To assist in vitro testing, a low-cost technique is proposed for reliably measuring the electric properties of liquids without requiring commercial equipment. Validation is performed within a specific prototype fabrication/testing approach for miniature antennas. To speed up design while providing an antenna for generic skin implantation, investigations are performed inside a canonical skin-tissue model. Resonance, radiation, and safety performance of the proposed antenna is finally evaluated inside an anatomical head model. This study provides valuable insight into the design of implantable antennas, assessing the significance of fabrication-specific details in numerical simulations and uncertainties in experimental testing for miniature structures. The proposed methodology can be applied to optimize antennas for several fabrication/testing approaches and biotelemetry applications.",
"title": ""
},
{
"docid": "f738f79a9d516389e1ad0c7343d525c4",
"text": "The I-V curves for Schottky diodes with two different contact areas and geometries fabricated through 1.2 μm CMOS process are presented. These curves are described applying the analysis and practical layout design. It takes into account the resistance, capacitance and reverse breakdown voltage in the semiconductor structure and the dependence of these parameters to improve its operation. The described diodes are used for a charge pump circuit implementation.",
"title": ""
},
{
"docid": "ae9fb1b7ff6821dd29945f768426d7fc",
"text": "Congestive heart failure (CHF) is a leading cause of death in the United States affecting approximately 670,000 individuals. Due to the prevalence of CHF related issues, it is prudent to seek out methodologies that would facilitate the prevention, monitoring, and treatment of heart disease on a daily basis. This paper describes WANDA (Weight and Activity with Blood Pressure Monitoring System); a study that leverages sensor technologies and wireless communications to monitor the health related measurements of patients with CHF. The WANDA system is a three-tier architecture consisting of sensors, web servers, and back-end databases. The system was developed in conjunction with the UCLA School of Nursing and the UCLA Wireless Health Institute to enable early detection of key clinical symptoms indicative of CHF-related decompensation. This study shows that CHF patients monitored by WANDA are less likely to have readings fall outside a healthy range. In addition, WANDA provides a useful feedback system for regulating readings of CHF patients.",
"title": ""
},
{
"docid": "0ce7465e40b3b13e5c316fb420a766d9",
"text": "We have been developing ldquoSmart Suitrdquo as a soft and light-weight wearable power assist system. A prototype for preventing low-back injury in agricultural works and its semi-active assist mechanism have been developed in the previous study. The previous prototype succeeded to reduce about 14% of average muscle fatigues of body trunk in waist extension/flexion motion. In this paper, we describe a prototype of smart suit for supporting waist and knee joint, and its control method for preventing the displacement of the adjustable assist force mechanism in order to keep the assist efficiency.",
"title": ""
},
{
"docid": "4264c3ed6ea24a896377a7efa2b425b0",
"text": "The pervasiveness of Web 2.0 and social networking sites has enabled people to interact with each other easily through various social media. For instance, popular sites like Del.icio.us, Flickr, and YouTube allow users to comment on shared content (bookmarks, photos, videos), and users can tag their favorite content. Users can also connect with one another, and subscribe to or become a fan or a follower of others. These diverse activities result in a multi-dimensional network among actors, forming group structures with group members sharing similar interests or affiliations. This work systematically addresses two challenges. First, it is challenging to effectively integrate interactions over multiple dimensions to discover hidden community structures shared by heterogeneous interactions. We show that representative community detection methods for single-dimensional networks can be presented in a unified view. Based on this unified view, we present and analyze four possible integration strategies to extend community detection from single-dimensional to multi-dimensional networks. In particular, we propose a novel integration scheme based on structural features. Another challenge is the evaluation of different methods without ground truth information about community membership. We employ a novel cross-dimension network validation procedure to compare the performance of different methods. We use synthetic data to deepen our understanding, and real-world data to compare integration strategies as well as baseline methods in a large scale. We study further the computational time of different methods, normalization effect during integration, sensitivity to related parameters, and alternative community detection methods for integration. Lei Tang, Xufei Wang, Huan Liu Computer Science and Engineering, Arizona State University, Tempe, AZ 85287, USA E-mail: {L.Tang, Xufei.Wang, [email protected]} Report Documentation Page Form Approved OMB No. 0704-0188 Public reporting burden for the collection of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington VA 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to a penalty for failing to comply with a collection of information if it does not display a currently valid OMB control number. 1. REPORT DATE 2010 2. REPORT TYPE 3. DATES COVERED 00-00-2010 to 00-00-2010 4. TITLE AND SUBTITLE Community Detection in Multi-Dimensional Networks 5a. CONTRACT NUMBER",
"title": ""
},
{
"docid": "cac7822c1a40b406c998449e2664815f",
"text": "This paper demonstrates the possibility and feasibility of an ultralow-cost antenna-in-package (AiP) solution for the upcoming generation of wireless local area networks (WLANs) denoted as IEEE802.11ad. The iterative design procedure focuses on maximally alleviating the inherent disadvantages of high-volume FR4 process at 60 GHz such as its relatively high material loss and fabrication restrictions. Within the planar antenna package, the antenna element, vertical transition, antenna feedline, and low- and high-speed interfaces are allocated in a vertical schematic. A circular stacked patch antenna renders the antenna package to exhibit 10-dB return loss bandwidth from 57-66 GHz. An embedded coplanar waveguide (CPW) topology is adopted for the antenna feedline and features less than 0.24 dB/mm in unit loss, which is extracted from measured parametric studies. The fabricated single antenna package is 9 mm × 6 mm × 0.404 mm in dimension. A multiple-element antenna package is fabricated, and its feasibility for future phase array applications is studied. Far-field radiation measurement using an inhouse radio-frequency (RF) probe station validates the single-antenna package to exhibit more than 4.1-dBi gain and 76% radiation efficiency.",
"title": ""
},
{
"docid": "adf1cfe981d965f95e783e1f4ed5fc5d",
"text": "PAST research has shown that real-time Twitter data can be used to predict market movement of securities and other financial instruments [1]. The goal of this paper is to prove whether Twitter data relating to cryptocurrencies can be utilized to develop advantageous crypto coin trading strategies. By way of supervised machine learning techniques, our team will outline several machine learning pipelines with the objective of identifying cryptocurrency market movement. The prominent alternative currency examined in this paper is Bitcoin (BTC). Our approach to cleaning data and applying supervised learning algorithms such as logistic regression, Naive Bayes, and support vector machines leads to a final hour-to-hour and day-to-day prediction accuracy exceeding 90%. In order to achieve this result, rigorous error analysis is employed in order to ensure that accurate inputs are utilized at each step of the model. This analysis yields a 25% accuracy increase on average.",
"title": ""
},
{
"docid": "b0741999659724f8fa5dc1117ec86f0d",
"text": "With the rapidly growing scales of statistical problems, subset based communicationfree parallel MCMC methods are a promising future for large scale Bayesian analysis. In this article, we propose a new Weierstrass sampler for parallel MCMC based on independent subsets. The new sampler approximates the full data posterior samples via combining the posterior draws from independent subset MCMC chains, and thus enjoys a higher computational efficiency. We show that the approximation error for the Weierstrass sampler is bounded by some tuning parameters and provide suggestions for choice of the values. Simulation study shows the Weierstrass sampler is very competitive compared to other methods for combining MCMC chains generated for subsets, including averaging and kernel smoothing.",
"title": ""
},
{
"docid": "50d22974ef09d0f02ee05d345e434055",
"text": "We present the exploring/exploiting tree (EET) algorithm for motion planning. The EET planner deliberately trades probabilistic completeness for computational efficiency. This tradeoff enables the EET planner to outperform state-of-the-art sampling-based planners by up to three orders of magnitude. We show that these considerable speedups apply for a variety of challenging real-world motion planning problems. The performance improvements are achieved by leveraging work space information to continuously adjust the sampling behavior of the planner. When the available information captures the planning problem's inherent structure, the planner's sampler becomes increasingly exploitative. When the available information is less accurate, the planner automatically compensates by increasing local configuration space exploration. We show that active balancing of exploration and exploitation based on workspace information can be a key ingredient to enabling highly efficient motion planning in practical scenarios.",
"title": ""
},
{
"docid": "6f709e89edaa619f41335b1a06eb713a",
"text": "Graphene patch microstrip antenna has been investigated for 600 GHz applications. The graphene material introduces a reconfigurable surface conductivity in terahertz frequency band. The input impedance is calculated using the finite integral technique. A five-lumped elements equivalent circuit for graphene patch microstrip antenna has been investigated. The values of the lumped elements equivalent circuit are optimized using the particle swarm optimization techniques. The optimization is performed to minimize the mean square error between the input impedance of the finite integral technique and that calculated by the equivalent circuit model. The effect of varying the graphene material chemical potential and relaxation time on the radiation characteristics of the graphene patch microstrip antenna has been investigated. An improved new equivalent circuit model has been introduced to best fitting the input impedance using a rational function and PSO. The Cauer's realization method is used to synthesize a new lumped-elements equivalent circuits.",
"title": ""
},
{
"docid": "d73d16ff470669b4935e85e2de815cb8",
"text": "As organizations aggressively deploy radio frequency identification systems, activists are increasingly concerned about RFID's potential to invade user privacy. This overview highlights potential threats and how they might be addressed using both technology and public policy.",
"title": ""
},
{
"docid": "a9399439831a970fcce8e0101696325f",
"text": "We describe the design, implementation, and evaluation of EMBERS, an automated, 24x7 continuous system for forecasting civil unrest across 10 countries of Latin America using open source indicators such as tweets, news sources, blogs, economic indicators, and other data sources. Unlike retrospective studies, EMBERS has been making forecasts into the future since Nov 2012 which have been (and continue to be) evaluated by an independent T&E team (MITRE). Of note, EMBERS has successfully forecast the June 2013 protests in Brazil and Feb 2014 violent protests in Venezuela. We outline the system architecture of EMBERS, individual models that leverage specific data sources, and a fusion and suppression engine that supports trading off specific evaluation criteria. EMBERS also provides an audit trail interface that enables the investigation of why specific predictions were made along with the data utilized for forecasting. Through numerous evaluations, we demonstrate the superiority of EMBERS over baserate methods and its capability to forecast significant societal happenings.",
"title": ""
},
{
"docid": "ffdfd49ad9216806ab6a6bf156bb5a87",
"text": "PDDL+ is an extension of PDDL that enables modelling planning domains with mixed discrete-continuous dynamics. In this paper we present a new approach to PDDL+ planning based on Constraint Answer Set Programming (CASP), i.e. ASP rules plus numerical constraints. To the best of our knowledge, ours is the first attempt to link PDDL+ planning and logic programming. We provide an encoding of PDDL+ models into CASP problems. The encoding can handle non-linear hybrid domains, and represents a solid basis for applying logic programming to PDDL+ planning. As a case study, we consider the EZCSP CASP solver and obtain promising results on a set of PDDL+ benchmark problems.",
"title": ""
},
{
"docid": "d5a816dd44d4d95b0d281880f1917831",
"text": "In order to plan a safe maneuver, self-driving vehicles need to understand the intent of other traffic participants. We define intent as a combination of discrete high level behaviors as well as continuous trajectories describing future motion. In this paper we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor as well as dynamic maps of the environment. Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reduce reaction time in self-driving applications.",
"title": ""
},
{
"docid": "26a9fb64389a5dbbbd8afdc6af0b6f07",
"text": "specifications of the essential structure of a system. Models in the analysis or preliminary design stages focus on the key concepts and mechanisms of the eventual system. They correspond in certain ways with the final system. But details are missing from the model, which must be added explicitly during the design process. The purpose of the abstract models is to get the high-level pervasive issues correct before tackling the more localized details. These models are intended to be evolved into the final models by a careful process that guarantees that the final system correctly implements the intent of the earlier models. There must be traceability from these essential models to the full models; otherwise, there is no assurance that the final system correctly incorporates the key properties that the essential model sought to show. Essential models focus on semantic intent. They do not need the full range of implementation options. Indeed, low-level performance distinctions often obscure the logical semantics. The path from an essential model to a complete implementation model must be clear and straightforward, however, whether it is generated automatically by a code generator or evolved manually by a designer. Full specifications of a final system. An implementation model includes enough information to build the system. It must include not only the logical semantics of the system and the algorithms, data structures, and mechanisms that ensure proper performance, but also organizational decisions about the system artifacts that are necessary for cooperative work by humans and processing by tools. This kind of model must include constructs for packaging the model for human understanding and for computer convenience. These are not properties of the target application itself. Rather, they are properties of the construction process. Exemplars of typical or possible systems. Well-chosen examples can give insight to humans and can validate system specifications and implementations. Even a large Chapter 2 • The Nature and Purpose of Models 17 collection of examples, however, necessarily falls short of a definitive description. Ultimately, we need models that specify the general case; that is what a program is, after all. Examples of typical data structures, interaction sequences, or object histories can help a human trying to understand a complicated situation, however. Examples must be used with some care. It is logically impossible to induce the general case from a set of examples, but well-chosen prototypes are the way most people think. An example model includes instances rather than general descriptors. It therefore tends to have a different feel than a generic descriptive model. Example models usually use only a subset of the UML constructs, those that deal with instances. Both descriptive models and exemplar models are useful in modeling a system. Complete or partial descriptions of systems. A model can be a complete description of a single system with no outside references. More often, it is organized as a set of distinct, discrete units, each of which may be stored and manipulated separately as a part of the entire description. Such models have “loose ends” that must be bound to other models in a complete system. Because the pieces have coherence and meaning, they can be combined with other pieces in various ways to produce many different systems. Achieving reuse is an important goal of good modeling. Models evolve over time. 
Models with greater degrees of detail are derived from more abstract models, and more concrete models are derived from more logical models. For example, a model might start as a high-level view of the entire system, with a few key services in brief detail and no embellishments. Over time, much more detail is added and variations are introduced. Also over time, the focus shifts from a front-end, user-centered logical view to a back-end, implementationcentered physical view. As the developers work with a system and understand it better, the model must be iterated at all levels to capture that understanding; it is impossible to understand a large system in a single, linear pass. There is no one “right” form for a model.",
"title": ""
},
{
"docid": "1256f0799ed585092e60b50fb41055be",
"text": "So far, plant identification has challenges for sev eral researchers. Various methods and features have been proposed. However, there are still many approaches could be investigated to develop robust plant identification systems. This paper reports several xperiments in using Zernike moments to build folia ge plant identification systems. In this case, Zernike moments were combined with other features: geometr ic features, color moments and gray-level co-occurrenc e matrix (GLCM). To implement the identifications systems, two approaches has been investigated. Firs t approach used a distance measure and the second u sed Probabilistic Neural Networks (PNN). The results sh ow that Zernike Moments have a prospect as features in leaf identification systems when they are combin ed with other features.",
"title": ""
}
] |
scidocsrr
|
a67fee9575a077eeb977700728c86da6
|
Combining monoSLAM with object recognition for scene augmentation using a wearable camera
|
[
{
"docid": "2aefddf5e19601c8338f852811cebdee",
"text": "This paper presents a system that allows online building of 3D wireframe models through a combination of user interaction and automated methods from a handheld camera-mouse. Crucially, the model being built is used to concurrently compute camera pose, permitting extendable tracking while enabling the user to edit the model interactively. In contrast to other model building methods that are either off-line and/or automated but computationally intensive, the aim here is to have a system that has low computational requirements and that enables the user to define what is relevant (and what is not) at the time the model is being built. OutlinAR hardware is also developed which simply consists of the combination of a camera with a wide field of view lens and a wheeled computer mouse.",
"title": ""
}
] |
[
{
"docid": "088011257e741b8d08a3b44978134830",
"text": "This paper deals with the kinematic and dynamic analyses of the Orthoglide 5-axis, a five-degree-of-freedom manipulator. It is derived from two manipulators: i) the Orthoglide 3-axis; a three dof translational manipulator and ii) the Agile eye; a parallel spherical wrist. First, the kinematic and dynamic models of the Orthoglide 5-axis are developed. The geometric and inertial parameters of the manipulator are determined by means of a CAD software. Then, the required motors performances are evaluated for some test trajectories. Finally, the motors are selected in the catalogue from the previous results.",
"title": ""
},
{
"docid": "8868fe4e0907fc20cc6cbc2b01456707",
"text": "Tracking multiple objects is a challenging task when objects move in groups and occlude each other. Existing methods have investigated the problems of group division and group energy-minimization; however, lacking overall objectgroup topology modeling limits their ability in handling complex object and group dynamics. Inspired with the social affinity property of moving objects, we propose a Graphical Social Topology (GST) model, which estimates the group dynamics by jointly modeling the group structure and the states of objects using a topological representation. With such topology representation, moving objects are not only assigned to groups, but also dynamically connected with each other, which enables in-group individuals to be correctly associated and the cohesion of each group to be precisely modeled. Using well-designed topology learning modules and topology training, we infer the birth/death and merging/splitting of dynamic groups. With the GST model, the proposed multi-object tracker can naturally facilitate the occlusion problem by treating the occluded object and other in-group members as a whole while leveraging overall state transition. Experiments on both RGB and RGB-D datasets confirm that the proposed multi-object tracker improves the state-of-the-arts especially in crowded scenes.",
"title": ""
},
{
"docid": "04853b59abf86a0dd19fdaac09c9a6c4",
"text": "A single color image can contain many cues informative towards different aspects of local geometric structure. We approach the problem of monocular depth estimation by using a neural network to produce a mid-level representation that summarizes these cues. This network is trained to characterize local scene geometry by predicting, at every image location, depth derivatives of different orders, orientations and scales. However, instead of a single estimate for each derivative, the network outputs probability distributions that allow it to express confidence about some coefficients, and ambiguity about others. Scene depth is then estimated by harmonizing this overcomplete set of network predictions, using a globalization procedure that finds a single consistent depth map that best matches all the local derivative distributions. We demonstrate the efficacy of this approach through evaluation on the NYU v2 depth data set.",
"title": ""
},
{
"docid": "ef5f170bef5daf0800e473554d67fa86",
"text": "Morphological segmentation of words is a subproblem of many natural language tasks, including handling out-of-vocabulary (OOV) words in machine translation, more effective information retrieval, and computer assisted vocabulary learning. Previous work typically relies on extensive statistical and semantic analyses to induce legitimate stems and affixes. We introduce a new learning based method and a prototype implementation of a knowledge light system for learning to segment a given word into word parts, including prefixes, suffixes, stems, and even roots. The method is based on the Conditional Random Fields (CRF) model. Evaluation results show that our method with a small set of seed training data and readily available resources can produce fine-grained morphological segmentation results that rival previous work and systems.",
"title": ""
},
{
"docid": "1c11472572758b6f831349ebf6443ad5",
"text": "In this paper, we propose a Switchable Deep Network (SDN) for pedestrian detection. The SDN automatically learns hierarchical features, salience maps, and mixture representations of different body parts. Pedestrian detection faces the challenges of background clutter and large variations of pedestrian appearance due to pose and viewpoint changes and other factors. One of our key contributions is to propose a Switchable Restricted Boltzmann Machine (SRBM) to explicitly model the complex mixture of visual variations at multiple levels. At the feature levels, it automatically estimates saliency maps for each test sample in order to separate background clutters from discriminative regions for pedestrian detection. At the part and body levels, it is able to infer the most appropriate template for the mixture models of each part and the whole body. We have devised a new generative algorithm to effectively pretrain the SDN and then fine-tune it with back-propagation. Our approach is evaluated on the Caltech and ETH datasets and achieves the state-of-the-art detection performance.",
"title": ""
},
{
"docid": "9e591fe1c8bf7a6a3bc4f31d70c9a94f",
"text": "Uploading data streams to a resource-rich cloud server for inner product evaluation, an essential building block in many popular stream applications (e.g., statistical monitoring), is appealing to many companies and individuals. On the other hand, verifying the result of the remote computation plays a crucial role in addressing the issue of trust. Since the outsourced data collection likely comes from multiple data sources, it is desired for the system to be able to pinpoint the originator of errors by allotting each data source a unique secret key, which requires the inner product verification to be performed under any two parties’ different keys. However, the present solutions either depend on a single key assumption or powerful yet practically-inefficient fully homomorphic cryptosystems. In this paper, we focus on the more challenging multi-key scenario where data streams are uploaded by multiple data sources with distinct keys. We first present a novel homomorphic verifiable tag technique to publicly verify the outsourced inner product computation on the dynamic data streams, and then extend it to support the verification of matrix product computation. We prove the security of our scheme in the random oracle model. Moreover, the experimental result also shows the practicability of our design.",
"title": ""
},
{
"docid": "0db28b5ec56259c8f92f6cc04d4c2601",
"text": "The application of neuroscience to marketing, and in particular to the consumer psychology of brands, has gained popularity over the past decade in the academic and the corporate world. In this paper, we provide an overview of the current and previous research in this area and explainwhy researchers and practitioners alike are excited about applying neuroscience to the consumer psychology of brands. We identify critical issues of past research and discuss how to address these issues in future research. We conclude with our vision of the future potential of research at the intersection of neuroscience and consumer psychology. © 2011 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "d8cf2b75936a7c4d5878c3c17ac89074",
"text": "A general principle of sensory processing is that neurons adapt to sustained stimuli by reducing their response over time. Most of our knowledge on adaptation in single cells is based on experiments in anesthetized animals. How responses adapt in awake animals, when stimuli may be behaviorally relevant or not, remains unclear. Here we show that contrast adaptation in mouse primary visual cortex depends on the behavioral relevance of the stimulus. Cells that adapted to contrast under anesthesia maintained or even increased their activity in awake naïve mice. When engaged in a visually guided task, contrast adaptation re-occurred for stimuli that were irrelevant for solving the task. However, contrast adaptation was reversed when stimuli acquired behavioral relevance. Regulation of cortical adaptation by task demand may allow dynamic control of sensory-evoked signal flow in the neocortex.",
"title": ""
},
{
"docid": "b2379dc57ea6ec09400a3e34e79a8d0d",
"text": "We propose that a robot speaks a Hanamogera (a semantic-free speech) when the robot speaks with a person. Hanamogera is semantic-free speech and the sound of the speech is a sound of the words which consists of phonogram characters. The consisted characters can be changed freely because the Hanamogera speech does not have to have any meaning. Each sound of characters of a Hanamogera is thought to have an impression according to the contained consonant/vowel in the characters. The Hanamogera is expected to make a listener feel that the talking with a robot which speaks a Hanamogera is fun because of a sound of the Hanamogera. We conducted an experiment of talking with a NAO and an experiment of evaluating to Hanamogera speeches. The results of the experiment showed that a talking with a Hanamogera speech robot was better fun than a talking with a nodding robot.",
"title": ""
},
{
"docid": "6f0283efa932663c83cc2c63d19fd6cf",
"text": "Most research that explores the emotional state of users of spoken dialog systems does not fully utilize the contextual nature that the dialog structure provides. This paper reports results of machine learning experiments designed to automatically classify the emotional state of user turns using a corpus of 5,690 dialogs collected with the “How May I Help You” spoken dialog system. We show that augmenting standard lexical and prosodic features with contextual features that exploit the structure of spoken dialog and track user state increases classification accuracy by 2.6%.",
"title": ""
},
{
"docid": "21f56bb6edbef3448275a0925bd54b3a",
"text": "Dr. Stephanie L. Cincotta (Psychiatry): A 35-year-old woman was seen in the emergency department of this hospital because of a pruritic rash. The patient had a history of hepatitis C virus (HCV) infection, acne, depression, and drug dependency. She had been in her usual health until 2 weeks before this presentation, when insomnia developed, which she attributed to her loss of a prescription for zolpidem. During the 10 days before this presentation, she reported seeing white “granular balls,” which she thought were mites or larvae, emerging from and crawling on her skin, sheets, and clothing and in her feces, apartment, and car, as well as having an associated pruritic rash. She was seen by her physician, who referred her to a dermatologist for consideration of other possible causes of the persistent rash, such as porphyria cutanea tarda, which is associated with HCV infection. Three days before this presentation, the patient ran out of clonazepam (after an undefined period during which she reportedly took more than the prescribed dose) and had increasing anxiety and insomnia. The same day, she reported seeing “bugs” on her 15-month-old son that were emerging from his scalp and were present on his skin and in his diaper and sputum. The patient scratched her skin and her child’s skin to remove the offending agents. The day before this presentation, she called emergency medical services and she and her child were transported by ambulance to the emergency department of another hospital. A diagnosis of possible cheyletiellosis was made. She was advised to use selenium sulfide shampoo and to follow up with her physician; the patient returned home with her child. On the morning of admission, while bathing her child, she noted that his scalp was turning red and he was crying. She came with her son to the emergency department of this hospital. The patient reported the presence of bugs on her skin, which she attempted to point out to examiners. She acknowledged a habit of picking at her skin since adolescence, which she said had a calming effect. Fourteen months earlier, shortly after the birth of her son, worsening acne developed that did not respond to treatment with topical antimicrobial agents and tretinoin. Four months later, a facial abscess due From the Departments of Psychiatry (S.R.B., N.K.) and Dermatology (D.K.), Massachusetts General Hospital, and the Departments of Psychiatry (S.R.B., N.K.) and Dermatology (D.K.), Harvard Medi‐ cal School — both in Boston.",
"title": ""
},
{
"docid": "d4ab2085eec138f99d4d490b0cbf9e3a",
"text": "A frequency-reconfigurable microstrip slot antenna is proposed. The antenna is capable of frequency switching at six different frequency bands between 2.2 and 4.75 GHz. Five RF p-i-n diode switches are positioned in the slot to achieve frequency reconfigurability. The feed line and the slot are bended to reduce 33% of the original size of the antenna. The biasing circuit is integrated into the ground plane to minimize the parasitic effects toward the performance of the antenna. Simulated and measured results are used to demonstrate the performance of the antenna. The simulated and measured return losses, together with the radiation patterns, are presented and compared.",
"title": ""
},
{
"docid": "c52d31c7ae39d1a7df04140e920a26d2",
"text": "In the past half-decade, Amazon Mechanical Turk has radically changed the way many scholars do research. The availability of a massive, distributed, anonymous crowd of individuals willing to perform general human-intelligence micro-tasks for micro-payments is a valuable resource for researchers and practitioners. This paper addresses the challenges of obtaining quality annotations for subjective judgment oriented tasks of varying difficulty. We design and conduct a large, controlled experiment (N=68,000) to measure the efficacy of selected strategies for obtaining high quality data annotations from non-experts. Our results point to the advantages of person-oriented strategies over process-oriented strategies. Specifically, we find that screening workers for requisite cognitive aptitudes and providing training in qualitative coding techniques is quite effective, significantly outperforming control and baseline conditions. Interestingly, such strategies can improve coder annotation accuracy above and beyond common benchmark strategies such as Bayesian Truth Serum (BTS).",
"title": ""
},
{
"docid": "f8bc67d88bdd9409e2f3dfdc89f6d93c",
"text": "A millimeter-wave CMOS on-chip stacked Marchand balun is presented in this paper. The balun is fabricated using a top pad metal layer as the single-ended port and is stacked above two metal conductors at the next highest metal layer in order to achieve sufficient coupling to function as the differential ports. Strip metal shields are placed underneath the structure to reduce substrate losses. An amplitude imbalance of 0.5 dB is measured with attenuations below 6.5 dB at the differential output ports at 30 GHz. The corresponding phase imbalance is below 5 degrees. The area occupied is 229μm × 229μm.",
"title": ""
},
{
"docid": "3ae3e7f38be2f2d989dde298a64d9ba4",
"text": "A number of compilers exploit the following strategy: translate a term to continuation-passing style (CPS) and optimize the resulting term using a sequence of reductions. Recent work suggests that an alternative strategy is superior: optimize directly in an extended source calculus. We suggest that the appropriate relation between the source and target calculi may be captured by a special case of a Galois connection known as a reflection. Previous work has focused on the weaker notion of an equational correspondence, which is based on equality rather than reduction. We show that Moggi's monad translation and Plotkin's CPS translation can both be regarded as reflections, and thereby strengthen a number of results in the literature.",
"title": ""
},
{
"docid": "b0687c84e408f3db46aa9fba6f9eeeb9",
"text": "Sex estimation is considered as one of the essential parameters in forensic anthropology casework, and requires foremost consideration in the examination of skeletal remains. Forensic anthropologists frequently employ morphologic and metric methods for sex estimation of human remains. These methods are still very imperative in identification process in spite of the advent and accomplishment of molecular techniques. A constant boost in the use of imaging techniques in forensic anthropology research has facilitated to derive as well as revise the available population data. These methods however, are less reliable owing to high variance and indistinct landmark details. The present review discusses the reliability and reproducibility of various analytical approaches; morphological, metric, molecular and radiographic methods in sex estimation of skeletal remains. Numerous studies have shown a higher reliability and reproducibility of measurements taken directly on the bones and hence, such direct methods of sex estimation are considered to be more reliable than the other methods. Geometric morphometric (GM) method and Diagnose Sexuelle Probabiliste (DSP) method are emerging as valid methods and widely used techniques in forensic anthropology in terms of accuracy and reliability. Besides, the newer 3D methods are shown to exhibit specific sexual dimorphism patterns not readily revealed by traditional methods. Development of newer and better methodologies for sex estimation as well as re-evaluation of the existing ones will continue in the endeavour of forensic researchers for more accurate results.",
"title": ""
},
{
"docid": "138ee58ce9d2bcfa14b44642cf9af08b",
"text": "This research is a partial test of Park et al.’s (2008) model to assess the impact of flow and brand equity in 3D virtual worlds. It draws on flow theory as its main theoretical foundation to understand and empirically assess the impact of flow on brand equity and behavioral intention in 3D virtual worlds. The findings suggest that the balance of skills and challenges in 3D virtual worlds influences users’ flow experience, which in turn influences brand equity. Brand equity then increases behavioral intention. The authors also found that the impact of flow on behavioral intention in 3D virtual worlds is indirect because the relationship between them is mediated by brand equity. This research highlights the importance of balancing the challenges posed by 3D virtual world branding sites with the users’ skills to maximize their flow experience and brand equity to increase the behavioral intention associated with the brand.",
"title": ""
},
{
"docid": "1c1775a64703f7276e4843b8afc26117",
"text": "This paper describes a computer vision based system for real-time robust traffic sign detection, tracking, and recognition. Such a framework is of major interest for driver assistance in an intelligent automotive cockpit environment. The proposed approach consists of two components. First, signs are detected using a set of Haar wavelet features obtained from AdaBoost training. Compared to previously published approaches, our solution offers a generic, joint modeling of color and shape information without the need of tuning free parameters. Once detected, objects are efficiently tracked within a temporal information propagation framework. Second, classification is performed using Bayesian generative modeling. Making use of the tracking information, hypotheses are fused over multiple frames. Experiments show high detection and recognition accuracy and a frame rate of approximately 10 frames per second on a standard PC.",
"title": ""
},
{
"docid": "3c530cf20819fe98a1fb2d1ab44dd705",
"text": "This paper presents a novel representation for three-dimensional objects in terms of affine-invariant image patches and their spatial relationships. Multi-view co nstraints associated with groups of patches are combined wit h a normalized representation of their appearance to guide matching and reconstruction, allowing the acquisition of true three-dimensional affine and Euclidean models from multiple images and their recognition in a single photograp h taken from an arbitrary viewpoint. The proposed approach does not require a separate segmentation stage and is applicable to cluttered scenes. Preliminary modeling and recognition results are presented.",
"title": ""
},
{
"docid": "162f080444935117c5125ae8b7c3d51e",
"text": "The named concepts and compositional operators present in natural language provide a rich source of information about the kinds of abstractions humans use to navigate the world. Can this linguistic background knowledge improve the generality and efficiency of learned classifiers and control policies? This paper aims to show that using the space of natural language strings as a parameter space is an effective way to capture natural task structure. In a pretraining phase, we learn a language interpretation model that transforms inputs (e.g. images) into outputs (e.g. labels) given natural language descriptions. To learn a new concept (e.g. a classifier), we search directly in the space of descriptions to minimize the interpreter’s loss on training examples. Crucially, our models do not require language data to learn these concepts: language is used only in pretraining to impose structure on subsequent learning. Results on image classification, text editing, and reinforcement learning show that, in all settings, models with a linguistic parameterization outperform those without.1",
"title": ""
}
] |
scidocsrr
|
03bfcb704a6678551c30cf2c18a79645
|
Prior image constrained compressed sensing (PICCS): a method to accurately reconstruct dynamic CT images from highly undersampled projection data sets.
|
[
{
"docid": "2871de581ee0efe242438567ca3a57dd",
"text": "The sparsity which is implicit in MR images is exploited to significantly undersample k-space. Some MR images such as angiograms are already sparse in the pixel representation; other, more complicated images have a sparse representation in some transform domain-for example, in terms of spatial finite-differences or their wavelet coefficients. According to the recently developed mathematical theory of compressed-sensing, images with a sparse representation can be recovered from randomly undersampled k-space data, provided an appropriate nonlinear recovery scheme is used. Intuitively, artifacts due to random undersampling add as noise-like interference. In the sparse transform domain the significant coefficients stand out above the interference. A nonlinear thresholding scheme can recover the sparse coefficients, effectively recovering the image itself. In this article, practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference. Incoherence is introduced by pseudo-random variable-density undersampling of phase-encodes. The reconstruction is performed by minimizing the l(1) norm of a transformed image, subject to data fidelity constraints. Examples demonstrate improved spatial resolution and accelerated acquisition for multislice fast spin-echo brain imaging and 3D contrast enhanced angiography.",
"title": ""
}
] |
[
{
"docid": "7f5bc34cd08a09014cff1b07c2cf72d0",
"text": "This paper presents the RF telecommunications system designed for the New Horizons mission, NASA’s planned mission to Pluto, with focus on new technologies developed to meet mission requirements. These technologies include an advanced digital receiver — a mission-enabler for its low DC power consumption at 2.3 W secondary power. The receiver is one-half of a card-based transceiver that is incorporated with other spacecraft functions into an integrated electronics module, providing further reductions in mass and power. Other developments include extending APL’s long and successful flight history in ultrastable oscillators (USOs) with an updated design for lower DC power. These USOs offer frequency stabilities to 1 part in 10, stabilities necessary to support New Horizons’ uplink radio science experiment. In antennas, the 2.1 meter high gain antenna makes use of shaped suband main reflectors to improve system performance and achieve a gain approaching 44 dBic. New Horizons would also be the first deep-space mission to fly a regenerative ranging system, offering up to a 30 dB performance improvement over sequential ranging, especially at long ranges. The paper will provide an overview of the current system design and development and performance details on the new technologies mentioned above. Other elements of the telecommunications system will also be discussed. Note: New Horizons is NASA’s planned mission to Pluto, and has not been approved for launch. All representations made in this paper are contingent on a decision by NASA to go forward with the preparation for and launch of the mission.",
"title": ""
},
{
"docid": "19ad4b01b9e55995ea85e72b9fa100bd",
"text": "This paper describes the integration of the Alice 3D virtual worlds environment into many disciplines in elementary school, middle school and high school. We have developed a wide range of Alice instructional materials including tutorials for both computer science concepts and animation concepts. To encourage the building of more complicated worlds, we have developed template Alice classes and worlds. With our materials, teachers and students are exposed to computing concepts while using Alice to create projects, stories, games and quizzes. These materials were successfully used in the summers 2008 and 2009 in training and working with over 130 teachers.",
"title": ""
},
{
"docid": "1d949b64320fce803048b981ae32ce38",
"text": "In the field of voice therapy, perceptual evaluation is widely used by expert listeners as a way to evaluate pathological and normal voice quality. This approach is understandably subjective as it is subject to listeners’ bias which high interand intra-listeners variability can be found. As such, research on automatic assessment of pathological voices using a combination of subjective and objective analyses emerged. The present study aimed to develop a complementary automatic assessment system for voice quality based on the well-known GRBAS scale by using a battery of multidimensional acoustical measures through Deep Neural Networks. A total of 44 dimensionality parameters including Mel-frequency Cepstral Coefficients, Smoothed Cepstral Peak Prominence and Long-Term Average Spectrum was adopted. In addition, the state-of-the-art automatic assessment system based on Modulation Spectrum (MS) features and GMM classifiers was used as comparison system. The classification results using the proposed method revealed a moderate correlation with subjective GRBAS scores of dysphonic severity, and yielded a better performance than MS-GMM system, with the best accuracy around 81.53%. The findings indicate that such assessment system can be used as an appropriate evaluation tool in determining the presence and severity of voice disorders.",
"title": ""
},
{
"docid": "fe31348bce3e6e698e26aceb8e99b2d8",
"text": "Web-based enterprises process events generated by millions of users interacting with their websites. Rich statistical data distilled from combining such interactions in near real-time generates enormous business value. In this paper, we describe the architecture of Photon, a geographically distributed system for joining multiple continuously flowing streams of data in real-time with high scalability and low latency, where the streams may be unordered or delayed. The system fully tolerates infrastructure degradation and datacenter-level outages without any manual intervention. Photon guarantees that there will be no duplicates in the joined output (at-most-once semantics) at any point in time, that most joinable events will be present in the output in real-time (near-exact semantics), and exactly-once semantics eventually.\n Photon is deployed within Google Advertising System to join data streams such as web search queries and user clicks on advertisements. It produces joined logs that are used to derive key business metrics, including billing for advertisers. Our production deployment processes millions of events per minute at peak with an average end-to-end latency of less than 10 seconds. We also present challenges and solutions in maintaining large persistent state across geographically distant locations, and highlight the design principles that emerged from our experience.",
"title": ""
},
{
"docid": "47e06f5c195d2e1ecb6199b99ef1ee2d",
"text": "We study weakly-supervised video object grounding: given a video segment and a corresponding descriptive sentence, the goal is to localize objects that are mentioned from the sentence in the video. During training, no object bounding boxes are available, but the set of possible objects to be grounded is known beforehand. Existing approaches in the image domain use Multiple Instance Learning (MIL) to ground objects by enforcing matches between visual and semantic features. A naive extension of this approach to the video domain is to treat the entire segment as a bag of spatial object proposals. However, an object existing sparsely across multiple frames might not be detected completely since successfully spotting it from one single frame would trigger a satisfactory match. To this end, we propagate the weak supervisory signal from the segment level to frames that likely contain the target object. For frames that are unlikely to contain the target objects, we use an alternative penalty loss. We also leverage the interactions among objects as a textual guide for the grounding. We evaluate our model on the newlycollected benchmark YouCook2-BoundingBox and show improvements over competitive baselines.",
"title": ""
},
{
"docid": "0957b0617894561ea6d6e85c43cfb933",
"text": "We consider the online metric matching problem. In this prob lem, we are given a graph with edge weights satisfying the triangl e inequality, andk vertices that are designated as the right side of the matchin g. Over time up tok requests arrive at an arbitrary subset of vertices in the gra ph and each vertex must be matched to a right side vertex immediately upon arrival. A vertex cannot be rematched to another vertex once it is matched. The goal is to minimize the total weight of the matching. We give aO(log k) competitive randomized algorithm for the problem. This improves upon the best known guarantee of O(log k) due to Meyerson, Nanavati and Poplawski [19]. It is well known that no deterministic al gorithm can have a competitive less than 2k − 1, and that no randomized algorithm can have a competitive ratio of less than l k.",
"title": ""
},
{
"docid": "c48d3a5d1cf7065de41bf3acfe5f9d0b",
"text": "In this work we perform experiments with the recently published work on Capsule Networks. Capsule Networks have been shown to deliver state of the art performance for MNIST and claim to have greater discriminative power than Convolutional Neural Networks for special tasks, such as recognizing overlapping digits. The authors of Capsule Networks have evaluated datasets with low number of categories, viz. MNIST, CIFAR-10, SVHN among others. We evaluate capsule networks on two datasets viz. Traffic Signals, Food101, and CIFAR10 with less number of iterations, making changes to the architecture to account for RGB images. Traditional techniques like dropout, batch normalization were applied to capsule networks for performance evaluation.",
"title": ""
},
{
"docid": "923a714ed2811e29647870a2694698b1",
"text": "Although weight and activation quantization is an effective approach for Deep Neural Network (DNN) compression and has a lot of potentials to increase inference speed leveraging bit-operations, there is still a noticeable gap in terms of prediction accuracy between the quantized model and the full-precision model. To address this gap, we propose to jointly train a quantized, bit-operation-compatible DNN and its associated quantizers, as opposed to using fixed, handcrafted quantization schemes such as uniform or logarithmic quantization. Our method for learning the quantizers applies to both network weights and activations with arbitrary-bit precision, and our quantizers are easy to train. The comprehensive experiments on CIFAR-10 and ImageNet datasets show that our method works consistently well for various network structures such as AlexNet, VGG-Net, GoogLeNet, ResNet, and DenseNet, surpassing previous quantization methods in terms of accuracy by an appreciable margin. Code available at https://github.com/Microsoft/LQ-Nets",
"title": ""
},
{
"docid": "fe62f8473bed5b26b220874ef448e912",
"text": "Dual stripline routing is more and more widely used in the modern high speed PCB design due to its cost advantage of reduced overall layer count. However, the major challenge of a successful dual stripline design is to handle the additional interferences introduced by the signals on adjacent layers. This paper studies the crosstalk effect of the dual stripline with both parallel and angled routing, and proposes design solutions to tackle the challenge. Analytical and empirical algorithms are proposed to estimate the crosstalk waveforms from multiple aggressors, which provide quick design risk assessment, and the waveform is well correlated to the 3D full wave EM simulation results.",
"title": ""
},
{
"docid": "d33b5c031cf44d3b7a95ca5b0335f91c",
"text": "Straightforward application of Deep Belief Nets (DBNs) to acoustic modeling produces a rich distributed representation of speech data that is useful for recognition and yields impressive results on the speaker-independent TIMIT phone recognition task. However, the first-layer Gaussian-Bernoulli Restricted Boltzmann Machine (GRBM) has an important limitation, shared with mixtures of diagonalcovariance Gaussians: GRBMs treat different components of the acoustic input vector as conditionally independent given the hidden state. The mean-covariance restricted Boltzmann machine (mcRBM), first introduced for modeling natural images, is a much more representationally efficient and powerful way of modeling the covariance structure of speech data. Every configuration of the precision units of the mcRBM specifies a different precision matrix for the conditional distribution over the acoustic space. In this work, we use the mcRBM to learn features of speech data that serve as input into a standard DBN. The mcRBM features combined with DBNs allow us to achieve a phone error rate of 20.5%, which is superior to all published results on speaker-independent TIMIT to date.",
"title": ""
},
{
"docid": "051c530bf9d49bf1066ddf856488dff1",
"text": "This review paper focusses on DESMO-J, a comprehensive and stable Java-based open-source simulation library. DESMO-J is recommended in numerous academic publications for implementing discrete event simulation models for various applications. The library was integrated into several commercial software products. DESMO-J’s functional range and usability is continuously improved by the Department of Informatics of the University of Hamburg (Germany). The paper summarizes DESMO-J’s core functionality and important design decisions. It also compares DESMO-J to other discrete event simulation frameworks. Furthermore, latest developments and new opportunities are addressed in more detail. These include a) improvements relating to the quality and applicability of the software itself, e.g. a port to .NET, b) optional extension packages like visualization libraries and c) new components facilitating a more powerful and flexible simulation logic, like adaption to real time or a compact representation of production chains and similar queuing systems. Finally, the paper exemplarily describes how to apply DESMO-J to harbor logistics and business process modeling, thus providing insights into DESMO-J practice.",
"title": ""
},
{
"docid": "06f8b713ed4020c99403c28cbd1befbc",
"text": "In the last decade, deep learning algorithms have become very popular thanks to the achieved performance in many machine learning and computer vision tasks. However, most of the deep learning architectures are vulnerable to so called adversarial examples. This questions the security of deep neural networks (DNN) for many securityand trust-sensitive domains. The majority of the proposed existing adversarial attacks are based on the differentiability of the DNN cost function. Defence strategies are mostly based on machine learning and signal processing principles that either try to detect-reject or filter out the adversarial perturbations and completely neglect the classical cryptographic component in the defence. In this work, we propose a new defence mechanism based on the second Kerckhoffs’s cryptographic principle which states that the defence and classification algorithm are supposed to be known, but not the key. To be compliant with the assumption that the attacker does not have access to the secret key, we will primarily focus on a gray-box scenario and do not address a white-box one. More particularly, we assume that the attacker does not have direct access to the secret block, but (a) he completely knows the system architecture, (b) he has access to the data used for training and testing and (c) he can observe the output of the classifier for each given input. We show empirically that our system is efficient against most famous state-of-the-art attacks in black-box and gray-box scenarios.",
"title": ""
},
{
"docid": "660ed094efb11b7d39ecfd5b6f2cfc19",
"text": "Protocol reverse engineering has often been a manual process that is considered time-consuming, tedious and error-prone. To address this limitation, a number of solutions have recently been proposed to allow for automatic protocol reverse engineering. Unfortunately, they are either limited in extracting protocol fields due to lack of program semantics in network traces or primitive in only revealing the flat structure of protocol format. In this paper, we present a system called AutoFormat that aims at not only extracting protocol fields with high accuracy, but also revealing the inherently “non-flat”, hierarchical structures of protocol messages. AutoFormat is based on the key insight that different protocol fields in the same message are typically handled in different execution contexts (e.g., the runtime call stack). As such, by monitoring the program execution, we can collect the execution context information for every message byte (annotated with its offset in the entire message) and cluster them to derive the protocol format. We have evaluated our system with more than 30 protocol messages from seven protocols, including two text-based protocols (HTTP and SIP), three binary-based protocols (DHCP, RIP, and OSPF), one hybrid protocol (CIFS/SMB), as well as one unknown protocol used by a real-world malware. Our results show that AutoFormat can not only identify individual message fields automatically and with high accuracy (an average 93.4% match ratio compared with Wireshark), but also unveil the structure of the protocol format by revealing possible relations (e.g., sequential, parallel, and hierarchical) among the message fields. Part of this research has been supported by the National Science Foundation under grants CNS-0716376 and CNS-0716444. The bulk of this work was performed when the first author was visiting George Mason University in Summer 2007.",
"title": ""
},
{
"docid": "201f576423ed88ee97d1505b6d5a4d3f",
"text": "The effectiveness of the treatment of breast cancer depends on its timely detection. An early step in the diagnosis is the cytological examination of breast material obtained directly from the tumor. This work reports on advances in computer-aided breast cancer diagnosis based on the analysis of cytological images of fine needle biopsies to characterize these biopsies as either benign or malignant. Instead of relying on the accurate segmentation of cell nuclei, the nuclei are estimated by circles using the circular Hough transform. The resulting circles are then filtered to keep only high-quality estimations for further analysis by a support vector machine which classifies detected circles as correct or incorrect on the basis of texture features and the percentage of nuclei pixels according to a nuclei mask obtained using Otsu's thresholding method. A set of 25 features of the nuclei is used in the classification of the biopsies by four different classifiers. The complete diagnostic procedure was tested on 737 microscopic images of fine needle biopsies obtained from patients and achieved 98.51% effectiveness. The results presented in this paper demonstrate that a computerized medical diagnosis system based on our method would be effective, providing valuable, accurate diagnostic information.",
"title": ""
},
{
"docid": "61c6c9f6a0f60333ad2997b15646a096",
"text": "The density of neustonic plastic particles was compared to that of zooplankton in the coastal ocean near Long Beach, California. Two trawl surveys were conducted, one after an extended dry period when there was little land-based runoff, the second shortly after a storm when runoff was extensive. On each survey, neuston samples were collected at five sites along a transect parallel to shore using a manta trawl lined with 333 micro mesh. Average plastic density during the study was 8 pieces per cubic meter, though density after the storm was seven times that prior to the storm. The mass of plastics was also higher after the storm, though the storm effect on mass was less than it was for density, reflecting a smaller average size of plastic particles after the storm. The average mass of plastic was two and a half times greater than that of plankton, and even greater after the storm. The spatial pattern of the ratio also differed before and after a storm. Before the storm, greatest plastic to plankton ratios were observed at two stations closest to shore, whereas after the storm these had the lowest ratios.",
"title": ""
},
{
"docid": "375b2025d7523234bb10f5f16b2b0764",
"text": "In this paper, we present a system including a novel component called programmable aperture and two associated post-processing algorithms for high-quality light field acquisition. The shape of the programmable aperture can be adjusted and used to capture light field at full sensor resolution through multiple exposures without any additional optics and without moving the camera. High acquisition efficiency is achieved by employing an optimal multiplexing scheme, and quality data is obtained by using the two post-processing algorithms designed for self calibration of photometric distortion and for multi-view depth estimation. View-dependent depth maps thus generated help boost the angular resolution of light field. Various post-exposure photographic effects are given to demonstrate the effectiveness of the system and the quality of the captured light field.",
"title": ""
},
{
"docid": "4bf253b2349978d17fd9c2400df61d21",
"text": "This paper proposes an architecture for the mapping between syntax and phonology – in particular, that aspect of phonology that determines the linear ordering of words. We propose that linearization is restricted in two key ways. (1) the relative ordering of words is fixed at the end of each phase, or ‘‘Spell-out domain’’; and (2) ordering established in an earlier phase may not be revised or contradicted in a later phase. As a consequence, overt extraction out of a phase P may apply only if the result leaves unchanged the precedence relations established in P. We argue first that this architecture (‘‘cyclic linearization’’) gives us a means of understanding the reasons for successive-cyclic movement. We then turn our attention to more specific predictions of the proposal: in particular, the e¤ects of Holmberg’s Generalization on Scandinavian Object Shift; and also the Inverse Holmberg Effects found in Scandinavian ‘‘Quantifier Movement’’ constructions (Rögnvaldsson (1987); Jónsson (1996); Svenonius (2000)) and in Korean scrambling configurations (Ko (2003, 2004)). The cyclic linearization proposal makes predictions that cross-cut the details of particular syntactic configurations. For example, whether an apparent case of verb fronting results from V-to-C movement or from ‘‘remnant movement’’ of a VP whose complements have been removed by other processes, the verb should still be required to precede its complements after fronting if it preceded them before fronting according to an ordering established at an earlier phase. We argue that ‘‘cross-construction’’ consistency of this sort is in fact found.",
"title": ""
},
{
"docid": "cb65229a1edd5fc6dc5cf6be7afc1b9e",
"text": "This session studies specific challenges that Machine Learning (ML) algorithms have to tackle when faced with Big Data problems. These challenges can arise when any of the dimensions in a ML problem grows significantly: a) size of training set, b) size of test set or c) dimensionality. The studies included in this edition explore the extension of previous ML algorithms and practices to Big Data scenarios. Namely, specific algorithms for recurrent neural network training, ensemble learning, anomaly detection and clustering are proposed. The results obtained show that this new trend of ML problems presents both a challenge and an opportunity to obtain results which could allow ML to be integrated in many new applications in years to come.",
"title": ""
},
{
"docid": "e29d3ab3d3b9bd6cbff1c2a79a6c3070",
"text": "This paper presents a study of passive Dickson based envelope detectors operating in the quadratic small signal regime, specifically intended to be used in RF front end of sensing units of IoE sensor nodes. Critical parameters such as open-circuit voltage sensitivity (OCVS), charge time, input impedance, and output noise are studied and simplified circuit models are proposed to predict the behavior of the detector, resulting in practical design intuitions. There is strong agreement between model predictions, simulation results and measurements of 15 representative test structures that were fabricated in a 130 nm RF CMOS process.",
"title": ""
},
{
"docid": "510b9b709d8bd40834ed0409d1e83d4d",
"text": "In this paper we describe AntHocNet, an algorithm for routing in mobile ad hoc networks. It is a hybrid algorithm, which combines reactive path setup with proactive path probing, maintenance and improvement. The algorithm is based on the Nature-inspired Ant Colony Optimization framework. Paths are learned by guided Monte Carlo sampling using ant-like agents communicating in a stigmergic way. In an extensive set of simulation experiments, we compare AntHocNet with AODV, a reference algorithm in the field. We show that our algorithm can outperform AODV on different evaluation criteria. AntHocNet’s performance advantage is visible over a broad range of possible network scenarios, and increases for larger, sparser and more mobile networks.",
"title": ""
}
] |
scidocsrr
|
01adc0efe604be82d0916c55a9044287
|
The Latent Relation Mapping Engine: Algorithm and Experiments
|
[
{
"docid": "80db4fa970d0999a43d31d58e23444bb",
"text": "There are at least two kinds of similarity. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason:stone is analogous to the pair carpenter:wood. This article introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, and information retrieval. Recently the Vector Space Model (VSM) of information retrieval has been adapted to measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) The patterns are derived automatically from the corpus, (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data, and (3) automatically generated synonyms are used to explore variations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. On the related problem of classifying semantic relations, LRA achieves similar gains over the VSM.",
"title": ""
}
] |
[
{
"docid": "fb4837a619a6b9e49ca2de944ec2314e",
"text": "Inverse reinforcement learning addresses the general problem of recovering a reward function from samples of a policy provided by an expert/demonstrator. In this paper, we introduce active learning for inverse reinforcement learning. We propose an algorithm that allows the agent to query the demonstrator for samples at specific states, instead of relying only on samples provided at “arbitrary” states. The purpose of our algorithm is to estimate the reward function with similar accuracy as other methods from the literature while reducing the amount of policy samples required from the expert. We also discuss the use of our algorithm in higher dimensional problems, using both Monte Carlo and gradient methods. We present illustrative results of our algorithm in several simulated examples of different complexities.",
"title": ""
},
{
"docid": "b49275c9f454cdb0061e0180ac50a04f",
"text": "Implementing controls in the car becomes a major challenge: The use of simple physical buttons does not scale to the increased number of assistive, comfort, and infotainment functions. Current solutions include hierarchical menus and multi-functional control devices, which increase complexity and visual demand. Another option is speech control, which is not widely accepted, as it does not support visibility of actions, fine-grained feedback, and easy undo of actions. Our approach combines speech and gestures. By using speech for identification of functions, we exploit the visibility of objects in the car (e.g., mirror) and simple access to a wide range of functions equaling a very broad menu. Using gestures for manipulation (e.g., left/right), we provide fine-grained control with immediate feedback and easy undo of actions. In a user-centered process, we determined a set of user-defined gestures as well as common voice commands. For a prototype, we linked this to a car interior and driving simulator. In a study with 16 participants, we explored the impact of this form of multimodal interaction on the driving performance against a baseline using physical buttons. The results indicate that the use of speech and gesture is slower than using buttons but results in a similar driving performance. Users comment in a DALI questionnaire that the visual demand is lower when using speech and gestures.",
"title": ""
},
{
"docid": "086f7cf2643450959d575562a67e3576",
"text": "Single image super resolution (SISR) is to reconstruct a high resolution image from a single low resolution image. The SISR task has been a very attractive research topic over the last two decades. In recent years, convolutional neural network (CNN) based models have achieved great performance on SISR task. Despite the breakthroughs achieved by using CNN models, there are still some problems remaining unsolved, such as how to recover high frequency details of high resolution images. Previous CNN based models always use a pixel wise loss, such as l2 loss. Although the high resolution images constructed by these models have high peak signal-to-noise ratio (PSNR), they often tend to be blurry and lack high-frequency details, especially at a large scaling factor. In this paper, we build a super resolution perceptual generative adversarial network (SRPGAN) framework for SISR tasks. In the framework, we propose a robust perceptual loss based on the discriminator of the built SRPGAN model. We use the Charbonnier loss function to build the content loss and combine it with the proposed perceptual loss and the adversarial loss. Compared with other state-of-the-art methods, our method has demonstrated great ability to construct images with sharp edges and rich details. We also evaluate our method on different benchmarks and compare it with previous CNN based methods. The results show that our method can achieve much higher structural similarity index (SSIM) scores on most of the benchmarks than the previous state-of-art methods.",
"title": ""
},
{
"docid": "ba87ca7a07065e25593e6ae5c173669d",
"text": "The intelligence community (IC) is asked to predict outcomes that may often be inherently unpredictable-and is blamed for the inevitable forecasting failures, be they false positives or false negatives. To move beyond blame games of accountability ping-pong that incentivize bureaucratic symbolism over substantive reform, it is necessary to reach bipartisan agreements on performance indicators that are transparent enough to reassure clashing elites (to whom the IC must answer) that estimates have not been politicized. Establishing such transideological credibility requires (a) developing accuracy metrics for decoupling probability and value judgments; (b) using the resulting metrics as criterion variables in validity tests of the IC's selection, training, and incentive systems; and (c) institutionalizing adversarial collaborations that conduct level-playing-field tests of clashing perspectives.",
"title": ""
},
{
"docid": "57290d8e0a236205c4f0ce887ffed3ab",
"text": "We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlining probabilistic model. This approach is in contrast with most frameworks of conditional GANs used in application today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors. With this modification, we were able to significantly improve the quality of the class conditional image generation on ILSVRC2012 (ImageNet) 1000-class image dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator. We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images. This new structure also enabled high quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator. The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection.",
"title": ""
},
{
"docid": "f7c4b71b970b7527cd2650ce1e05ab1b",
"text": "BACKGROUND\nPhysician burnout has reached epidemic levels, as documented in national studies of both physicians in training and practising physicians. The consequences are negative effects on patient care, professionalism, physicians' own care and safety, and the viability of health-care systems. A more complete understanding than at present of the quality and outcomes of the literature on approaches to prevent and reduce burnout is necessary.\n\n\nMETHODS\nIn this systematic review and meta-analysis, we searched MEDLINE, Embase, PsycINFO, Scopus, Web of Science, and the Education Resources Information Center from inception to Jan 15, 2016, for studies of interventions to prevent and reduce physician burnout, including single-arm pre-post comparison studies. We required studies to provide physician-specific burnout data using burnout measures with validity support from commonly accepted sources of evidence. We excluded studies of medical students and non-physician health-care providers. We considered potential eligibility of the abstracts and extracted data from eligible studies using a standardised form. Outcomes were changes in overall burnout, emotional exhaustion score (and high emotional exhaustion), and depersonalisation score (and high depersonalisation). We used random-effects models to calculate pooled mean difference estimates for changes in each outcome.\n\n\nFINDINGS\nWe identified 2617 articles, of which 15 randomised trials including 716 physicians and 37 cohort studies including 2914 physicians met inclusion criteria. Overall burnout decreased from 54% to 44% (difference 10% [95% CI 5-14]; p<0·0001; I2=15%; 14 studies), emotional exhaustion score decreased from 23·82 points to 21·17 points (2·65 points [1·67-3·64]; p<0·0001; I2=82%; 40 studies), and depersonalisation score decreased from 9·05 to 8·41 (0·64 points [0·15-1·14]; p=0·01; I2=58%; 36 studies). High emotional exhaustion decreased from 38% to 24% (14% [11-18]; p<0·0001; I2=0%; 21 studies) and high depersonalisation decreased from 38% to 34% (4% [0-8]; p=0·04; I2=0%; 16 studies).\n\n\nINTERPRETATION\nThe literature indicates that both individual-focused and structural or organisational strategies can result in clinically meaningful reductions in burnout among physicians. Further research is needed to establish which interventions are most effective in specific populations, as well as how individual and organisational solutions might be combined to deliver even greater improvements in physician wellbeing than those achieved with individual solutions.\n\n\nFUNDING\nArnold P Gold Foundation Research Institute.",
"title": ""
},
{
"docid": "7182dfe75bc09df526da51cd5c8c8d20",
"text": "Rapid progress has been made towards question answering (QA) systems that can extract answers from text. Existing neural approaches make use of expensive bidirectional attention mechanisms or score all possible answer spans, limiting scalability. We propose instead to cast extractive QA as an iterative search problem: select the answer’s sentence, start word, and end word. This representation reduces the space of each search step and allows computation to be conditionally allocated to promising search paths. We show that globally normalizing the decision process and back-propagating through beam search makes this representation viable and learning efficient. We empirically demonstrate the benefits of this approach using our model, Globally Normalized Reader (GNR), which achieves the second highest single model performance on the Stanford Question Answering Dataset (68.4 EM, 76.21 F1 dev) and is 24.7x faster than bi-attention-flow. We also introduce a data-augmentation method to produce semantically valid examples by aligning named entities to a knowledge base and swapping them with new entities of the same type. This method improves the performance of all models considered in this work and is of independent interest for a variety of NLP tasks.",
"title": ""
},
{
"docid": "2af4d946d00b37ec0f6d37372c85044b",
"text": "Training of discrete latent variable models remains challenging because passing gradient information through discrete units is difficult. We propose a new class of smoothing transformations based on a mixture of two overlapping distributions, and show that the proposed transformation can be used for training binary latent models with either directed or undirected priors. We derive a new variational bound to efficiently train with Boltzmann machine priors. Using this bound, we develop DVAE++, a generative model with a global discrete prior and a hierarchy of convolutional continuous variables. Experiments on several benchmarks show that overlapping transformations outperform other recent continuous relaxations of discrete latent variables including Gumbel-Softmax (Maddison et al., 2016; Jang et al., 2016), and discrete variational autoencoders (Rolfe, 2016).",
"title": ""
},
{
"docid": "912c213d76bed8d90f636ea5a6220cf1",
"text": "Across the world, organizations have teams gathering threat data to protect themselves from incoming cyber attacks and maintain a strong cyber security posture. Teams are also sharing information, because along with the data collected internally, organizations need external information to have a comprehensive view of the threat landscape. The information about cyber threats comes from a variety of sources, including sharing communities, open-source and commercial sources, and it spans many different levels and timescales. Immediately actionable information are often low-level indicators of compromise, such as known malware hash values or command-and-control IP addresses, where an actionable response can be executed automatically by a system. Threat intelligence refers to more complex cyber threat information that has been acquired or inferred through the analysis of existing information. Information such as the different malware families used over time with an attack or the network of threat actors involved in an attack, is valuable information and can be vital to understanding and predicting attacks, threat developments, as well as informing law enforcement investigations. This information is also actionable, but on a longer time scale. Moreover, it requires action and decision-making at the human level. There is a need for effective intelligence management platforms to facilitate the generation, refinement, and vetting of data, post sharing. In designing such a system, some of the key challenges that exist include: working with multiple intelligence sources, combining and enriching data for greater intelligence, determining intelligence relevance based on technical constructs, and organizational input, delivery into organizational workflows and into technological products. This paper discusses these challenges encountered and summarizes the community requirements and expectations for an all-encompassing Threat Intelligence Management Platform. The requirements expressed in this paper, when implemented, will serve as building blocks to create systems that can maximize value out of a set of collected intelligence and translate those findings into action for a broad range of stakeholders.",
"title": ""
},
{
"docid": "3a98dd611afcfd6d51c319bde3b84cc9",
"text": "This note provides a family of classification problems, indexed by a positive integer k, where all shallow networks with fewer than exponentially (in k) many nodes exhibit error at least 1/3, whereas a deep network with 2 nodes in each of 2k layers achieves zero error, as does a recurrent network with 3 distinct nodes iterated k times. The proof is elementary, and the networks are standard feedforward networks with ReLU (Rectified Linear Unit) nonlinearities.",
"title": ""
},
{
"docid": "2f5776d8ce9714dcee8d458b83072f74",
"text": "The componential theory of creativity is a comprehensive model of the social and psychological components necessary for an individual to produce creative work. The theory is grounded in a definition of creativity as the production of ideas or outcomes that are both novel and appropriate to some goal. In this theory, four components are necessary for any creative response: three components within the individual – domainrelevant skills, creativity-relevant processes, and intrinsic task motivation – and one component outside the individual – the social environment in which the individual is working. The current version of the theory encompasses organizational creativity and innovation, carrying implications for the work environments created by managers. This entry defines the components of creativity and how they influence the creative process, describing modifications to the theory over time. Then, after comparing the componential theory to other creativity theories, the article describes this theory’s evolution and impact.",
"title": ""
},
{
"docid": "c3566171b68e4025931a72064e74e4ae",
"text": "Training a Fully Convolutional Network (FCN) for semantic segmentation requires a large number of pixel-level masks, which involves a large amount of human labour and time for annotation. In contrast, image-level labels are much easier to obtain. In this work, we propose a novel method for weakly supervised semantic segmentation with only image-level labels. The method relies on a large scale co-segmentation framework that can produce object masks for a group of images containing objects belonging to the same semantic class. We first retrieve images from search engines, e.g. Flickr and Google, using semantic class names as queries, e.g. class names in PASCAL VOC 2012. We then use high quality masks produced by co-segmentation on the retrieved images as well as the target dataset images with image level labels to train segmentation networks. We obtain IoU 56.9 on test set of PASCAL VOC 2012, which reaches state of the art performance.",
"title": ""
},
{
"docid": "56c42f370442a5ec485e9f1d719d7141",
"text": "The computation of page importance in a huge dynamic graph has recently attracted a lot of attention because of the web. Page importance or page rank is defined as the fixpoint of a matrix equation. Previous algorithms compute it off-line and require the use of a lot of extra CPU as well as disk resources in particular to store and maintain the link matrix of the web. We briefly discuss a new algorithm that works on-line, and uses much less resources. In particular, it does not require storing the link matrix. It is on-line in that it continuously refines its estimate of page importance while the web/graph is visited. When the web changes, page importance changes as well. We modify the algorithm so that it adapts dynamically to changes of the web. We report on experiments on web data and on synthetic data.",
"title": ""
},
{
"docid": "bb444221c5a8eefad3e2a9a175bfccbc",
"text": "This paper presents new experimental results of angle of arrival (AoA) measurements for localizing passive RFID tags in the UHF frequency range. The localization system is based on the principle of a phased array with electronic beam steering mechanism. This approach has been successfully applied within a UHF RFID system and it allows the precise determination of the angle and the position of small passive RFID tags. The paper explains the basic principle, the experimental setup with the phased array and shows results of the measurements.",
"title": ""
},
{
"docid": "b71197073ea33bb8c61973e8cd7d2775",
"text": "This paper discusses the latest developments in the optimization and fabrication of 3.3kV SiC vertical DMOSFETs. The devices show superior on-state and switching losses compared to the even the latest generation of 3.3kV fast Si IGBTs and promise to extend the upper switching frequency of high-voltage power conversion systems beyond several tens of kHz without the need to increase part count with 3-level converter stacks of faster 1.7kV IGBTs.",
"title": ""
},
{
"docid": "6a5e0e30eb5b7f2efe76e0e58e04ae4a",
"text": "We propose an approach to learn spatio-temporal features in videos from intermediate visual representations we call “percepts” using Gated-Recurrent-Unit Recurrent Networks (GRUs). Our method relies on percepts that are extracted from all levels of a deep convolutional network trained on the large ImageNet dataset. While high-level percepts contain highly discriminative information, they tend to have a low-spatial resolution. Low-level percepts, on the other hand, preserve a higher spatial resolution from which we can model finer motion patterns. Using low-level percepts, however, can lead to high-dimensionality video representations. To mitigate this effect and control the number of parameters, we introduce a variant of the GRU model that leverages the convolution operations to enforce sparse connectivity of the model units and share parameters across the input spatial locations. We empirically validate our approach on both Human Action Recognition and Video Captioning tasks. In particular, we achieve results equivalent to state-of-art on the YouTube2Text dataset using a simpler caption-decoder model and without extra 3D CNN features.",
"title": ""
},
{
"docid": "a5f557ddac63cd24a11c1490e0b4f6d4",
"text": "Continuous opinion dynamics optimizer (CODO) is an algorithm based on human collective opinion formation process for solving continuous optimization problems. In this paper, we have studied the impact of topology and introduction of leaders in the society on the optimization performance of CODO. We have introduced three new variants of CODO and studied the efficacy of algorithms on several benchmark functions. Experimentation demonstrates that scale free CODO performs significantly better than all algorithms. Also, the role played by individuals with different degrees during the optimization process is studied.",
"title": ""
},
{
"docid": "da5562859bfed0057e0566679a4aca3d",
"text": "Machine-to-Machine (M2M) paradigm enables machines (sensors, actuators, robots, and smart meter readers) to communicate with each other with little or no human intervention. M2M is a key enabling technology for the cyber-physical systems (CPSs). This paper explores CPS beyond M2M concept and looks at futuristic applications. Our vision is CPS with distributed actuation and in-network processing. We describe few particular use cases that motivate the development of the M2M communication primitives tailored to large-scale CPS. M2M communications in literature were considered in limited extent so far. The existing work is based on small-scale M2M models and centralized solutions. Different sources discuss different primitives. Few existing decentralized solutions do not scale well. There is a need to design M2M communication primitives that will scale to thousands and trillions of M2M devices, without sacrificing solution quality. The main paradigm shift is to design localized algorithms, where CPS nodes make decisions based on local knowledge. Localized coordination and communication in networked robotics, for matching events and robots, were studied to illustrate new directions.",
"title": ""
},
{
"docid": "d3eff4c249e464e9e571d80d4fe95bbd",
"text": "CONIKS is a proposed key transparency system which enables a centralized service provider to maintain an auditable yet privacypreserving directory of users’ public keys. In the original CONIKS design, users must monitor that their data is correctly included in every published snapshot of the directory, necessitating either slow updates or trust in an unspecified third-party to audit that the data structure has stayed consistent. We demonstrate that the data structures for CONIKS are very similar to those used in Ethereum, a consensus computation platform with a Turing-complete programming environment. We can take advantage of this to embed the core CONIKS data structures into an Ethereum contract with only minor modifications. Users may then trust the Ethereum network to audit the data structure for consistency and non-equivocation. Users who do not trust (or are unaware of) Ethereum can self-audit the CONIKS data structure as before. We have implemented a prototype contract for our hybrid EthIKS scheme, demonstrating that it adds only modest bandwidth overhead to CONIKS proofs and costs hundredths of pennies per key update in fees at today’s rates.",
"title": ""
},
{
"docid": "8921cffb633b0ea350b88a57ef0d4437",
"text": "This paper addresses the problem of identifying likely topics of texts by their position in the text. It describes the automated training and evaluation of an Optimal Position Policy, a method of locating the likely positions of topic-bearing sentences based on genre-speci c regularities of discourse structure. This method can be used in applications such as information retrieval, routing, and text summarization.",
"title": ""
}
] |
scidocsrr
|
01eb7e40fc907559056c1c5eb1c04c12
|
Data Mining Model for Predicting Student Enrolment in STEM Courses in Higher Education Institutions
|
[
{
"docid": "f7a36f939cbe9b1d403625c171491837",
"text": "This paper explores the socio-demographic variables (age, gender, ethnicity, education, work status, and disability) and study environment (course programme and course block), that may influence persistence or dropout of students at the Open Polytechnic of New Zealand. We examine to what extent these factors, i.e. enrolment data help us in pre-identifying successful and unsuccessful students. The data stored in the Open Polytechnic student management system from 2006 to 2009, covering over 450 students who enrolled to 71150 Information Systems course was used to perform a quantitative analysis of study outcome. Based on a data mining techniques (such as feature selection and classification trees), the most important factors for student success and a profile of the typical successful and unsuccessful students are identified. The empirical results show the following: (i) the most important factors separating successful from unsuccessful students are: ethnicity, course programme and course block; (ii) among classification tree growing methods Classification and Regression Tree (CART) was the most successful in growing the tree with an overall percentage of correct classification of 60.5%; and (iii) both the risk estimated by the cross-validation and the gain diagram suggests that all trees, based only on enrolment data are not quite good in separating successful from unsuccessful students. The implications of these results for academic and administrative staff are discussed.",
"title": ""
},
{
"docid": "055faaaa14959a204ca19a4962f6e822",
"text": "Data mining (also known as knowledge discovery from databases) is the process of extraction of hidden, previously unknown and potentially useful information from databases. The outcome of the extracted data can be analyzed for the future planning and development perspectives. In this paper, we have made an attempt to demonstrate how one can extract the local (district) level census, socio-economic and population related other data for knowledge discovery and their analysis using the powerful data mining tool Weka. I. DATA MINING Data mining has been defined as the nontrivial extraction of implicit, previously unknown, and potentially useful information from databases/data warehouses. It uses machine learning, statistical and visualization techniques to discover and present knowledge in a form, which is easily comprehensive to humans [1]. Data mining, the extraction of hidden predictive information from large databases, is a powerful new technology with great potential to help user focus on the most important information in their data warehouses. Data mining tools predict future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. The automated, prospective analyses offered by data mining move beyond the analyses of past events provided by retrospective tools typical of decision support systems. Data mining tools can answer business questions that traditionally were too time consuming to resolve. They scour databases for hidden patterns, finding predictive information that experts may miss because it lies outside their expectations. Data mining techniques can be implemented rapidly on existing software and hardware platforms to enhance the value of existing information resources, and can be integrated with new products and systems as they are brought on-line [2]. Data mining steps in the knowledge discovery process are as follows: 1. Data cleaningThe removal of noise and inconsistent data. 2. Data integration The combination of multiple sources of data. 3. Data selection The data relevant for analysis is retrieved from the database. 4. Data transformation The consolidation and transformation of data into forms appropriate for mining. 5. Data mining The use of intelligent methods to extract patterns from data. 6. Pattern evaluation Identification of patterns that are interesting. (ICETSTM – 2013) International Conference in “Emerging Trends in Science, Technology and Management-2013, Singapore Census Data Mining and Data Analysis using WEKA 36 7. Knowledge presentation Visualization and knowledge representation techniques are used to present the extracted or mined knowledge to the end user [3]. The actual data mining task is the automatic or semi-automatic analysis of large quantities of data to extract previously unknown interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection) and dependencies (association rule mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. 
Neither the data collection, data preparation, nor result interpretation and reporting are part of the data mining step, but do belong to the overall KDD process as additional steps [7][8]. II. WEKA: Weka (Waikato Environment for Knowledge Analysis) is a popular suite of machine learning software written in Java, developed at the University of Waikato, New Zealand. Weka is free software available under the GNU General Public License. The Weka workbench contains a collection of visualization tools and algorithms for data analysis and predictive modeling, together with graphical user interfaces for easy access to this functionality [4]. Weka is a collection of machine learning algorithms for solving real-world data mining problems. It is written in Java and runs on almost any platform. The algorithms can either be applied directly to a dataset or called from your own Java code [5]. The original non-Java version of Weka was a TCL/TK front-end to (mostly third-party) modeling algorithms implemented in other programming languages, plus data preprocessing utilities in C, and a Makefile-based system for running machine learning experiments. This original version was primarily designed as a tool for analyzing data from agricultural domains, but the more recent fully Java-based version (Weka 3), for which development started in 1997, is now used in many different application areas, in particular for educational purposes and research. Advantages of Weka include: I. Free availability under the GNU General Public License II. Portability, since it is fully implemented in the Java programming language and thus runs on almost any modern computing platform III. A comprehensive collection of data preprocessing and modeling techniques IV. Ease of use due to its graphical user interfaces Weka supports several standard data mining tasks, more specifically, data preprocessing, clustering, classification, regression, visualization, and feature selection [10]. All of Weka's techniques are predicated on the assumption that the data is available as a single flat file or relation, where each data point is described by a fixed number of attributes (normally, numeric or nominal attributes, but some other attribute types are also supported). Weka provides access to SQL databases using Java Database Connectivity and can process the result returned by a database query. It is not capable of multi-relational data mining, but there is separate software for converting a collection of linked database tables into a single table that is suitable for processing using Weka. Another important area that is currently not covered by the algorithms included in the Weka distribution is sequence modeling [4]. III. DATA PROCESSING, METHODOLOGY AND RESULTS The primary available data such as census (2001), socio-economic data, and few basic information of Latur district are collected from National Informatics Centre (NIC), Latur, which is mainly required to design and develop the database for Latur district of Maharashtra state of India. The database is designed in MS-Access 2003 database management system to store the collected data. The data is formed according to the required format and structures. Further, the data is converted to ARFF (Attribute Relation File Format) format to process in WEKA.
An ARFF file is an ASCII text file that describes a list of instances sharing a set of attributes. ARFF files were developed by the Machine Learning Project at the Department of Computer Science of The University of Waikato for use with the Weka machine learning software. This document describes the version of ARFF used with Weka versions 3.2 to 3.3; this is an extension of the ARFF format as described in the data mining book written by Ian H. Witten and Eibe Frank [6][9]. After processing the ARFF file in WEKA the list of all attributes, statistics and other parameters can be utilized as shown in Figure 1. Fig. 1 Processed ARFF file in WEKA. In the above shown file, data for 729 villages is processed with different attributes (25) like population, health, literacy, village locations etc. Among all these, few of them are preprocessed attributes generated by census data like, percent_male_literacy, total_percent_literacy, total_percent_illiteracy, sex_ratio etc. The processed data in Weka can be analyzed using different data mining techniques like, Classification, Clustering, Association rule mining, Visualization etc. algorithms. The Figure 2 shows the few processed attributes which are visualized into a 2 dimensional graphical representation. Fig. 2 Graphical visualization of processed attributes. The information can be extracted with respect to two or more associative relation of data set. In this process, we have made an attempt to visualize the impact of male and female literacy on the gender inequality. The literacy related and population data is processed and computed the percent wise male and female literacy. Accordingly we have computed the sex ratio attribute from the given male and female population data. The new attributes like, male_percent_literacy, female_percent_literacy and sex_ratio are compared with each other to extract the impact of literacy on gender inequality. The Figure 3 and Figure 4 are the extracted results of sex ratio values with male and female literacy. Fig. 3 Female literacy and Sex ratio values. Fig. 4 Male literacy and Sex ratio values. On the Y-axis, the female percent literacy values are shown in Figure 3, and the male percent literacy values are shown in Figure 4. By considering both the results, the female percent literacy is poorer than the male percent literacy in the district. The sex ratio values are higher in male percent literacy than the female percent literacy. The results are purely showing that the literacy is very much important to manage the gender inequality of any region. ACKNOWLEDGEMENT: Authors are grateful to the department of NIC, Latur for providing all the basic data and WEKA for providing such a strong tool to extract and analyze knowledge from database. CONCLUSION Knowledge extraction from database is becom",
"title": ""
},
{
"docid": "120452d49d476366abcb52b86d8110b5",
"text": "Many companies like credit card, insurance, bank, retail industry require direct marketing. Data mining can help those institutes to set marketing goal. Data mining techniques have good prospects in their target audiences and improve the likelihood of response. In this work we have investigated two data mining techniques: the Naïve Bayes and the C4.5 decision tree algorithms. The goal of this work is to predict whether a client will subscribe a term deposit. We also made comparative study of performance of those two algorithms. Publicly available UCI data is used to train and test the performance of the algorithms. Besides, we extract actionable knowledge from decision tree that focuses to take interesting and important decision in business area.",
"title": ""
}
] |
[
{
"docid": "a7317f06cf34e501cb169bdf805e7e34",
"text": "It's natural to promote your best and brightest, especially when you think they may leave for greener pastures if you don't continually offer them new challenges and rewards. But promoting smart, ambitious young managers too quickly often robs them of the chance to develop the emotional competencies that come with time and experience--competencies like the ability to negotiate with peers, regulate emotions in times of crisis, and win support for change. Indeed, at some point in a manager's career--usually at the vice president level--raw talent and ambition become less important than the ability to influence and persuade, and that's the point at which the emotionally immature manager will lose his effectiveness. This article argues that delaying a promotion can sometimes be the best thing a senior executive can do for a junior manager. The inexperienced manager who is given time to develop his emotional competencies may be better prepared for the interpersonal demands of top-level leadership. The authors recommend that senior executives employ these strategies to help boost their protégés' people skills: sharpen the 360-degree feedback process, give managers cross-functional assignments to improve their negotiation skills, make the development of emotional competencies mandatory, make emotional competencies a performance measure, and encourage managers to develop informal learning partnerships with peers and mentors. Delaying a promotion can be difficult given the steadfast ambitions of many junior executives and the hectic pace of organizational life. It may mean going against the norm of promoting people almost exclusively on smarts and business results. It may also mean contending with the disappointment of an esteemed subordinate. But taking the time to build people's emotional competencies isn't an extravagance; it's critical to developing effective leaders.",
"title": ""
},
{
"docid": "64139426292bc1744904a0758b6caed1",
"text": "The quantity and complexity of available information is rapidly increasing. This potential information overload challenges the standard information retrieval models, as users find it increasingly difficult to find relevant information. We therefore propose a method that can utilize the potentially valuable knowledge contained in concept models such as ontologies, and thereby assist users in querying, using the terminology of the domain. The primary focus of this dissertation is similarity measures for use in ontology-based information retrieval. We aim at incorporating the information contained in ontologies by choosing a representation formalism where queries and objects in the information base are described using a lattice-algebraic concept language containing expressions that can be directly mapped into the ontology. Similarity between the description of the query and descriptions of the objects is calculated based on a nearness principle derived from the structure and relations of the ontology. This measure is then used to perform ontology-based query expansion. By doing so, we can replace semantic matching from direct reasoning over the ontology with numerical similarity calculation by means of a general aggregation principle The choice of the proposed similarity measure is guided by a set of properties aimed at ensuring the measures accordance with a set of distinctive structural qualities derived from the ontology. We furthermore empirically evaluate the proposed similarity measure by comparing the similarity ratings for pairs of concepts produced by the proposed measure, with the mean similarity ratings produced by humans for the same pairs.",
"title": ""
},
{
"docid": "710e81da55d50271b55ac9a4f2d7f986",
"text": "Although prior research has examined how individual difference factors are related to relationship initiation and formation over the Internet (e.g., online dating sites, social networking sites), little research has examined how dispositional factors are related to other aspects of online dating. The present research therefore sought to examine the relationship between several dispositional factors, such as Big-Five personality traits, self-esteem, rejection sensitivity, and attachment styles, and the use of online dating sites and online dating behaviors. Rejection sensitivity was the only dispositional variable predictive of use of online dating sites whereby those higher in rejection sensitivity are more likely to use online dating sites than those lower in rejection sensitivity. We also found that those higher in rejection sensitivity, those lower in conscientiousness, and men indicated being more likely to engage in potentially risky behaviors related to meeting an online dating partner face-to-face. Further research is needed to further explore the relationships between these dispositional factors and online dating behaviors. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "bb314530c796fbec6679a4a0cc6cd105",
"text": "The undergraduate computer science curriculum is generally focused on skills and tools; most students are not exposed to much research in the field, and do not learn how to navigate the research literature. We describe how science fiction reviews were used as a gateway to research reviews. Students learn a little about current or recent research on a topic that stirs their imagination, and learn how to search for, read critically, and compare technical papers on a topic related their chosen science fiction book, movie, or TV show.",
"title": ""
},
{
"docid": "371dad2a860f7106f10fd1f204afd3f2",
"text": "Increased neuromuscular excitability with varying clinical and EMG features were also observed during KCl administration in both cases. The findings are discussed on the light of the membrane ionic gradients current theory.",
"title": ""
},
{
"docid": "eaeccd0d398e0985e293d680d2265528",
"text": "Deep networks have been successfully applied to visual tracking by learning a generic representation offline from numerous training images. However the offline training is time-consuming and the learned generic representation may be less discriminative for tracking specific objects. In this paper we present that, even without learning, simple convolutional networks can be powerful enough to develop a robust representation for visual tracking. In the first frame, we randomly extract a set of normalized patches from the target region as filters, which define a set of feature maps in the subsequent frames. These maps measure similarities between each filter and the useful local intensity patterns across the target, thereby encoding its local structural information. Furthermore, all the maps form together a global representation, which maintains the relative geometric positions of the local intensity patterns, and hence the inner geometric layout of the target is also well preserved. A simple and effective online strategy is adopted to update the representation, allowing it to robustly adapt to target appearance variations. Our convolution networks have surprisingly lightweight structure, yet perform favorably against several state-of-the-art methods on a large benchmark dataset with 50 challenging videos.",
"title": ""
},
{
"docid": "10e41955aea6710f198744ac1f201d64",
"text": "Current research on culture focuses on independence and interdependence and documents numerous East-West psychological differences, with an increasing emphasis placed on cognitive mediating mechanisms. Lost in this literature is a time-honored idea of culture as a collective process composed of cross-generationally transmitted values and associated behavioral patterns (i.e., practices). A new model of neuro-culture interaction proposed here addresses this conceptual gap by hypothesizing that the brain serves as a crucial site that accumulates effects of cultural experience, insofar as neural connectivity is likely modified through sustained engagement in cultural practices. Thus, culture is \"embrained,\" and moreover, this process requires no cognitive mediation. The model is supported in a review of empirical evidence regarding (a) collective-level factors involved in both production and adoption of cultural values and practices and (b) neural changes that result from engagement in cultural practices. Future directions of research on culture, mind, and the brain are discussed.",
"title": ""
},
{
"docid": "d5b986cf02b3f9b01e5307467c1faec2",
"text": "Most sentiment analysis approaches use as baseline a support vector machines (SVM) classifier with binary unigram weights. In this paper, we explore whether more sophisticated feature weighting schemes from Information Retrieval can enhance classification accuracy. We show that variants of the classictf.idf scheme adapted to sentiment analysis provide significant increases in accuracy, especially when using a sublinear function for term frequency weights and document frequency smoothing. The techniques are tested on a wide selection of data sets and produce the best accuracy to our knowledge.",
"title": ""
},
{
"docid": "d39843f342646e4d338ab92bb7391d76",
"text": "In this paper, a double-axis planar micro-fluxgate magnetic sensor and its front-end circuitry are presented. The ferromagnetic core material, i.e., the Vitrovac 6025 X, has been deposited on top of the coils with the dc-magnetron sputtering technique, which is a new type of procedure with respect to the existing solutions in the field of fluxgate sensors. This procedure allows us to obtain a core with the good magnetic properties of an amorphous ferromagnetic material, which is typical of a core with 25-mum thickness, but with a thickness of only 1 mum, which is typical of an electrodeposited core. The micro-Fluxgate has been realized in a 0.5- mum CMOS process using copper metal lines to realize the excitation coil and aluminum metal lines for the sensing coil, whereas the integrated interface circuitry for exciting and reading out the sensor has been realized in a 0.35-mum CMOS technology. Applying a triangular excitation current of 18 mA peak at 100 kHz, the magnetic sensitivity achieved is about 10 LSB/muT [using a 13-bit analog-to-digital converter (ADC)], which is suitable for detecting the Earth's magnetic field (plusmn60 muT), whereas the linearity error is 3% of the full scale. The maximum angle error of the sensor evaluating the Earth magnetic field is 2deg. The power consumption of the sensor is about 13.7 mW. The total power consumption of the system is about 90 mW.",
"title": ""
},
{
"docid": "7d0d68f2dd9e09540cb2ba71646c21d2",
"text": "INTRODUCTION: Back in time dentists used to place implants in locations with sufficient bone-dimensions only, with less regard to placement of final definitive restoration but most of the times, the placement of implant is not as accurate as intended and even a minor variation in comparison to ideal placement causes difficulties in fabrication of final prosthesis. The use of bone substitutes and membranes is now one of the standard therapeutic approaches. In order to accelerate healing of bone graft over the bony defect, numerous techniques utilizing platelet and fibrinogen concentrates have been introduced in the literature.. OBJECTIVES: This study was designed to evaluate the efficacy of using Autologous Concentrated Growth Factors (CGF) Enriched Bone Graft Matrix (Sticky Bone) and CGF-Enriched Fibrin Membrane in management of dehiscence defect around dental implant in narrow maxillary anterior ridge. MATERIALS AND METHODS: Eleven DIO implants were inserted in six adult patients presenting an upper alveolar ridge width of less than 4mm determined by cone beam computed tomogeraphy (CBCT). After implant placement, the resultant vertical labial dehiscence defect was augmented utilizing Sticky Bone and CGF-Enriched Fibrin Membrane. Three CBCTs were made, pre-operatively, immediately postoperatively and six-months post-operatively. The change in vertical defect size was calculated radiographically then statistically analyzed. RESULTS: Vertical dehiscence defect was sufficiently recovered in 5 implant-sites while in the other 6 sites it was decreased to mean value of 1.25 mm ± 0.69 SD, i.e the defect coverage in 6 implants occurred with mean value of 4.59 mm ±0.49 SD. Also the results of the present study showed that the mean of average implant stability was 59.89 mm ± 3.92 CONCLUSIONS: The combination of PRF mixed with CGF with bone graft (allograft) can increase the quality (density) of the newly formed bone and enhance the rate of new bone formation.",
"title": ""
},
{
"docid": "c7d23af5ad79d9863e83617cf8bbd1eb",
"text": "Insulin resistance has long been associated with obesity. More than 40 years ago, Randle and colleagues postulated that lipids impaired insulin-stimulated glucose use by muscles through inhibition of glycolysis at key points. However, work over the past two decades has shown that lipid-induced insulin resistance in skeletal muscle stems from defects in insulin-stimulated glucose transport activity. The steatotic liver is also resistant to insulin in terms of inhibition of hepatic glucose production and stimulation of glycogen synthesis. In muscle and liver, the intracellular accumulation of lipids-namely, diacylglycerol-triggers activation of novel protein kinases C with subsequent impairments in insulin signalling. This unifying hypothesis accounts for the mechanism of insulin resistance in obesity, type 2 diabetes, lipodystrophy, and ageing; and the insulin-sensitising effects of thiazolidinediones.",
"title": ""
},
{
"docid": "bb8b6d2424ef7709aa1b89bc5d119686",
"text": "We have applied a Long Short-Term Memory neural network to model S&P 500 volatility, incorporating Google domestic trends as indicators of the public mood and macroeconomic factors. In a held-out test set, our Long Short-Term Memory model gives a mean absolute percentage error of 24.2%, outperforming linear Ridge/Lasso and autoregressive GARCH benchmarks by at least 31%. This evaluation is based on an optimal observation and normalization scheme which maximizes the mutual information between domestic trends and daily volatility in the training set. Our preliminary investigation shows strong promise for better predicting stock behavior via deep learning and neural network models.",
"title": ""
},
{
"docid": "8e8dcbc4eacf7484a44b4b6647fcfdb2",
"text": "BACKGROUND\nWith the rapid accumulation of biological datasets, machine learning methods designed to automate data analysis are urgently needed. In recent years, so-called topic models that originated from the field of natural language processing have been receiving much attention in bioinformatics because of their interpretability. Our aim was to review the application and development of topic models for bioinformatics.\n\n\nDESCRIPTION\nThis paper starts with the description of a topic model, with a focus on the understanding of topic modeling. A general outline is provided on how to build an application in a topic model and how to develop a topic model. Meanwhile, the literature on application of topic models to biological data was searched and analyzed in depth. According to the types of models and the analogy between the concept of document-topic-word and a biological object (as well as the tasks of a topic model), we categorized the related studies and provided an outlook on the use of topic models for the development of bioinformatics applications.\n\n\nCONCLUSION\nTopic modeling is a useful method (in contrast to the traditional means of data reduction in bioinformatics) and enhances researchers' ability to interpret biological information. Nevertheless, due to the lack of topic models optimized for specific biological data, the studies on topic modeling in biological data still have a long and challenging road ahead. We believe that topic models are a promising method for various applications in bioinformatics research.",
"title": ""
},
{
"docid": "b5b6fc6ce7690ae8e49e1951b08172ce",
"text": "The output voltage derivative term associated with a PID controller injects significant noise in a dc-dc converter. This is mainly due to the parasitic resistance and inductance of the output capacitor. Particularly, during a large-signal transient, noise injection significantly degrades phase margin. Although noise characteristics can be improved by reducing the cutoff frequency of the low-pass filter associated with the voltage derivative, this degrades the closed-loop bandwidth. A formulation of a PID controller is introduced to replace the output voltage derivative with information about the capacitor current, thus reducing noise injection. It is shown that this formulation preserves the fundamental principle of a PID controller and incorporates a load current feedforward, as well as inductor current dynamics. This can be helpful to further improve bandwidth and phase margin. The proposed method is shown to be equivalent to a voltage-mode-controlled buck converter and a current-mode-controlled boost converter with a PID controller in the voltage feedback loop. A buck converter prototype is tested, and the proposed algorithm is implemented using a field-programmable gate array.",
"title": ""
},
{
"docid": "77985effa998d08e75eaa117e07fc7a9",
"text": "After two successful years of Event Nugget evaluation in the TAC KBP workshop, the third Event Nugget evaluation track for Knowledge Base Population(KBP) still attracts a lot of attention from the field. In addition to the traditional event nugget and coreference tasks, we introduce a new event sequencing task in English. The new task has brought more complex event relation reasoning to the current evaluations. In this paper we try to provide an overview on the task definition, data annotation, evaluation and trending research methods. We further discuss our efforts in creating the new event sequencing task and interesting research problems related to it.",
"title": ""
},
{
"docid": "2269c84a2725605242790cf493425e0c",
"text": "Tissue engineering aims to improve the function of diseased or damaged organs by creating biological substitutes. To fabricate a functional tissue, the engineered construct should mimic the physiological environment including its structural, topographical, and mechanical properties. Moreover, the construct should facilitate nutrients and oxygen diffusion as well as removal of metabolic waste during tissue regeneration. In the last decade, fiber-based techniques such as weaving, knitting, braiding, as well as electrospinning, and direct writing have emerged as promising platforms for making 3D tissue constructs that can address the abovementioned challenges. Here, we critically review the techniques used to form cell-free and cell-laden fibers and to assemble them into scaffolds. We compare their mechanical properties, morphological features and biological activity. We discuss current challenges and future opportunities of fiber-based tissue engineering (FBTE) for use in research and clinical practice.",
"title": ""
},
{
"docid": "93f2fb12d61f3acb2eb31f9a2335b9c3",
"text": "Cluster identification in large scale information network is a highly attractive issue in the network knowledge mining. Traditionally, community detection algorithms are designed to cluster object population based on minimizing the cutting edge number. Recently, researchers proposed the concept of higher-order clustering framework to segment network objects under the higher-order connectivity patterns. However, the essences of the numerous methodologies are focusing on mining the homogeneous networks to identify groups of objects which are closely related to each other, indicating that they ignore the heterogeneity of different types of objects and links in the networks. In this study, we propose an integrated framework of heterogeneous information network structure and higher-order clustering for mining the hidden relationship, which include three major steps: (1) Construct the heterogeneous network, (2) Convert HIN to Homogeneous network, and (3) Community detection.",
"title": ""
},
{
"docid": "226d474f5d0278f81bcaf7203706486b",
"text": "Human pose estimation is a well-known computer vision problem that receives intensive research interest. The reason for such interest is the wide range of applications that the successful estimation of human pose offers. Articulated pose estimation includes real time acquisition, analysis, processing and understanding of high dimensional visual information. Ensemble learning methods operating on hand-engineered features have been commonly used for addressing this task. Deep learning exploits representation learning methods to learn multiple levels of representations from raw input data, alleviating the need to hand-crafted features. Deep convolutional neural networks are achieving the state-of-the-art in visual object recognition, localization, detection. In this paper, the pose estimation task is formulated as an offset joint regression problem. The 3D joints positions are accurately detected from a single raw depth image using a deep convolutional neural networks model. The presented method relies on the utilization of the state-of-the-art data generation pipeline to generate large, realistic, and highly varied synthetic set of training images. Analysis and experimental results demonstrate the generalization performance and the real time successful application of the proposed method.",
"title": ""
},
{
"docid": "49d5f6fdc02c777d42830bac36f6e7e2",
"text": "Current tools for exploratory data analysis (EDA) require users to manually select data attributes, statistical computations and visual encodings. This can be daunting for large-scale, complex data. We introduce Foresight, a visualization recommender system that helps the user rapidly explore large high-dimensional datasets through “guideposts.” A guidepost is a visualization corresponding to a pronounced instance of a statistical descriptor of the underlying data, such as a strong linear correlation between two attributes, high skewness or concentration about the mean of a single attribute, or a strong clustering of values. For each descriptor, Foresight initially presents visualizations of the “strongest” instances, based on an appropriate ranking metric. Given these initial guideposts, the user can then look at “nearby” guideposts by issuing “guidepost queries” containing constraints on metric type, metric strength, data attributes, and data values. Thus, the user can directly explore the network of guideposts, rather than the overwhelming space of data attributes and visual encodings. Foresight also provides for each descriptor a global visualization of ranking-metric values to both help orient the user and ensure a thorough exploration process. Foresight facilitates interactive exploration of large datasets using fast, approximate sketching to compute ranking metrics. We also contribute insights on EDA practices of data scientists, summarizing results from an interview study we conducted to inform the design of Foresight.",
"title": ""
},
{
"docid": "b261534c045299c1c3a0e0cc37caa618",
"text": "Michelangelo (1475-1564) had a life-long interest in anatomy that began with his participation in public dissections in his early teens, when he joined the court of Lorenzo de' Medici and was exposed to its physician-philosopher members. By the age of 18, he began to perform his own dissections. His early anatomic interests were revived later in life when he aspired to publish a book on anatomy for artists and to collaborate in the illustration of a medical anatomy text that was being prepared by the Paduan anatomist Realdo Colombo (1516-1559). His relationship with Colombo likely began when Colombo diagnosed and treated him for nephrolithiasis in 1549. He seems to have developed gouty arthritis in 1555, making the possibility of uric acid stones a distinct probability. Recurrent urinary stones until the end of his life are well documented in his correspondence, and available documents imply that he may have suffered from nephrolithiasis earlier in life. His terminal illness with symptoms of fluid overload suggests that he may have sustained obstructive nephropathy. That this may account for his interest in kidney function is evident in his poetry and drawings. Most impressive in this regard is the mantle of the Creator in his painting of the Separation of Land and Water in the Sistine Ceiling, which is in the shape of a bisected right kidney. His use of the renal outline in a scene representing the separation of solids (Land) from liquid (Water) suggests that Michelangelo was likely familiar with the anatomy and function of the kidney as it was understood at the time.",
"title": ""
}
] |
scidocsrr
|
ef23a68c8a9a134db33900965e814f8d
|
Analysis of Permission-based Security in Android through Policy Expert, Developer, and End User Perspectives
|
[
{
"docid": "948d3835e90c530c4290e18f541d5ef2",
"text": "Each time a user installs an application on their Android phone they are presented with a full screen of information describing what access they will be granting that application. This information is intended to help them make two choices: whether or not they trust that the application will not damage the security of their device and whether or not they are willing to share their information with the application, developer, and partners in question. We performed a series of semi-structured interviews in two cities to determine whether people read and understand these permissions screens, and to better understand how people perceive the implications of these decisions. We find that the permissions displays are generally viewed and read, but not understood by Android users. Alarmingly, we find that people are unaware of the security risks associated with mobile apps and believe that app marketplaces test and reject applications. In sum, users are not currently well prepared to make informed privacy and security decisions around installing applications.",
"title": ""
}
] |
[
{
"docid": "8e10d20723be23d699c0c581c529ee19",
"text": "Insect-scale legged robots have the potential to locomote on rough terrain, crawl through confined spaces, and scale vertical and inverted surfaces. However, small scale implies that such robots are unable to carry large payloads. Limited payload capacity forces miniature robots to utilize simple control methods that can be implemented on a simple onboard microprocessor. In this study, the design of a new version of the biologically-inspired Harvard Ambulatory MicroRobot (HAMR) is presented. In order to find the most suitable control inputs for HAMR, maneuverability experiments are conducted for several drive parameters. Ideal input candidates for orientation and lateral velocity control are identified as a result of the maneuverability experiments. Using these control inputs, two simple feedback controllers are implemented to control the orientation and the lateral velocity of the robot. The controllers are used to force the robot to track trajectories with a minimum turning radius of 55 mm and a maximum lateral to normal velocity ratio of 0.8. Due to their simplicity, the controllers presented in this work are ideal for implementation with on-board computation for future HAMR prototypes.",
"title": ""
},
{
"docid": "12b855b39278c49d448fbda9aa56cacf",
"text": "Human visual system (HVS) can perceive constant color under varying illumination conditions while digital images record information of both reflectance (physical color) of objects and illumination. Retinex theory, formulated by Edwin H. Land, aimed to simulate and explain this feature of HVS. However, to recover the reflectance from a given image is in general an ill-posed problem. In this paper, we establish an L1-based variational model for Retinex theory that can be solved by a fast computational approach based on Bregman iteration. Compared with previous works, our L1-Retinex method is more accurate for recovering the reflectance, which is illustrated by examples and statistics. In medical images such as magnetic resonance imaging (MRI), intensity inhomogeneity is often encountered due to bias fields. This is a similar formulation to Retinex theory while the MRI has some specific properties. We then modify the L1-Retinex method and develop a new algorithm for MRI data. We demonstrate the performance of our method by comparison with previous work on simulated and real data.",
"title": ""
},
{
"docid": "6a9738cbe28b53b3a9ef179091f05a4a",
"text": "The study examined the impact of advertising on building brand equity in Zimbabwe’s Tobacco Auction floors. In this study, 100 farmers were selected from 88 244 farmers registered in the four tobacco growing regions of country. A structured questionnaire was used as a tool to collect primary data. A pilot survey with 20 participants was initially conducted to test the reliability of the questionnaire. Results of the pilot study were analysed to test for reliability using SPSS.Results of the study found that advertising affects brand awareness, brand loyalty, brand association and perceived quality. 55% of the respondents agreed that advertising changed their perceived quality on auction floors. A linear regression analysis was performed to predict brand quality as a function of the type of farmer, source of information, competitive average pricing, loyalty, input assistance, service delivery, number of floors, advert mode, customer service, floor reputation and attitude. There was a strong relationship between brand quality and the independent variables as depicted by the regression coefficient of 0.885 and the model fit is perfect at 78.3%. From the ANOVA tables, a good fit was established between advertising and brand equity with p=0.001 which is less than the significance level of 0.05. While previous researches concentrated on the elements of brand equity as suggested by Keller’s brand equity model, this research has managed to extend the body of knowledge on brand equity by exploring the role of advertising. Future research should assess the relationship between advertising and a brand association.",
"title": ""
},
{
"docid": "2c9e17d4c5bfb803ea1ff20ea85fbd10",
"text": "In this paper, we present a new and significant theoretical discovery. If the absolute height difference between base station (BS) antenna and user equipment (UE) antenna is larger than zero, then the network capacity performance in terms of the area spectral efficiency (ASE) will continuously decrease as the BS density increases for ultra-dense (UD) small cell networks (SCNs). This performance behavior has a tremendous impact on the deployment of UD SCNs in the 5th- generation (5G) era. Network operators may invest large amounts of money in deploying more network infrastructure to only obtain an even worse network performance. Our study results reveal that it is a must to lower the SCN BS antenna height to the UE antenna height to fully achieve the capacity gains of UD SCNs in 5G. However, this requires a revolutionized approach of BS architecture and deployment, which is explored in this paper too.",
"title": ""
},
{
"docid": "755f7d663e813d7450089fc0d7058037",
"text": "This paper presents a new approach for learning in structured domains (SDs) using a constructive neural network for graphs (NN4G). The new model allows the extension of the input domain for supervised neural networks to a general class of graphs including both acyclic/cyclic, directed/undirected labeled graphs. In particular, the model can realize adaptive contextual transductions, learning the mapping from graphs for both classification and regression tasks. In contrast to previous neural networks for structures that had a recursive dynamics, NN4G is based on a constructive feedforward architecture with state variables that uses neurons with no feedback connections. The neurons are applied to the input graphs by a general traversal process that relaxes the constraints of previous approaches derived by the causality assumption over hierarchical input data. Moreover, the incremental approach eliminates the need to introduce cyclic dependencies in the definition of the system state variables. In the traversal process, the NN4G units exploit (local) contextual information of the graphs vertices. In spite of the simplicity of the approach, we show that, through the compositionality of the contextual information developed by the learning, the model can deal with contextual information that is incrementally extended according to the graphs topology. The effectiveness and the generality of the new approach are investigated by analyzing its theoretical properties and providing experimental results.",
"title": ""
},
{
"docid": "de39f498f28cf8cfc01f851ca3582d32",
"text": "Program autotuning has been shown to achieve better or more portable performance in a number of domains. However, autotuners themselves are rarely portable between projects, for a number of reasons: using a domain-informed search space representation is critical to achieving good results; search spaces can be intractably large and require advanced machine learning techniques; and the landscape of search spaces can vary greatly between different problems, sometimes requiring domain specific search techniques to explore efficiently.\n This paper introduces OpenTuner, a new open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy to use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests. We demonstrate the efficacy and generality of OpenTuner by building autotuners for 7 distinct projects and 16 total benchmarks, showing speedups over prior techniques of these projects of up to 2.8x with little programmer effort.",
"title": ""
},
{
"docid": "96fd2fdc0ea4fbde407ac1f56452ca24",
"text": "EEG signals, measuring transient brain activities, can be used as a source of biometric information with potential application in high-security person recognition scenarios. However, due to the inherent nature of these signals and the process used for their acquisition, their effective preprocessing is critical for their successful utilisation. In this paper we compare the effectiveness of different wavelet-based noise removal methods and propose an EEG-based biometric identification system which combines two such de-noising methods to enhance the signal preprocessing stage. In tests using 50 subjects from a public database, the proposed new approach is shown to provide improved identification performance over alternative techniques. Another important preprocessing consideration is the segmentation of the EEG record prior to de-noising. Different segmentation approaches were investigated and the trade-off between performance and computation time is explored. Finally the paper reports on the impact of the choice of wavelet function used for feature extraction on system performance.",
"title": ""
},
{
"docid": "4e86e02be77fe4e10c199efa1e9456c4",
"text": "This paper presents EsdRank, a new technique for improving ranking using external semi-structured data such as controlled vocabularies and knowledge bases. EsdRank treats vocabularies, terms and entities from external data, as objects connecting query and documents. Evidence used to link query to objects, and to rank documents are incorporated as features between query-object and object-document correspondingly. A latent listwise learning to rank algorithm, Latent-ListMLE, models the objects as latent space between query and documents, and learns how to handle all evidence in a unified procedure from document relevance judgments. EsdRank is tested in two scenarios: Using a knowledge base for web search, and using a controlled vocabulary for medical search. Experiments on TREC Web Track and OHSUMED data show significant improvements over state-of-the-art baselines.",
"title": ""
},
{
"docid": "5d05addd1cac2ea4ca5008950a21bd06",
"text": "We propose a general purpose variational inference algorithm that forms a natural counterpart of gradient descent for optimization. Our method iteratively transports a set of particles to match the target distribution, by applying a form of functional gradient descent that minimizes the KL divergence. Empirical studies are performed on various real world models and datasets, on which our method is competitive with existing state-of-the-art methods. The derivation of our method is based on a new theoretical result that connects the derivative of KL divergence under smooth transforms with Stein’s identity and a recently proposed kernelized Stein discrepancy, which is of independent interest.",
"title": ""
},
{
"docid": "935c404529b02cee2620e52f7a09b84d",
"text": "We introduce the Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal exploration mechanism which allows active learning of inverse models in high-dimensional redundant robots. This allows a robot to efficiently and actively learn distributions of parameterized motor skills/policies that solve a corresponding distribution of parameterized tasks/goals. The architecture makes the robot sample actively novel parameterized tasks in the task space, based on a measure of competence progress, each of which triggers low-level goal-directed learning of the motor policy parameters that allow to solve it. For both learning and generalization, the system leverages regression techniques which allow to infer the motor policy parameters corresponding to a given novel parameterized task, and based on the previously learnt correspondences between policy and task parameters. We present experiments with high-dimensional continuous sensorimotor spaces in three different robotic setups: 1) learning the inverse kinematics in a highly-redundant robotic arm, 2) learning omnidirectional locomotion with motor primitives in a quadruped robot, 3) an arm learning to control a fishing rod with a flexible wire. We show that 1) exploration in the task space can be a lot faster than exploration in the actuator space for learning inverse models in redundant robots; 2) selecting goals maximizing competence progress creates developmental trajectories driving the robot to progressively focus on tasks of increasing complexity and is statistically significantly more efficient than selecting tasks randomly, as well as more efficient than different standard active motor babbling methods; 3) this architecture allows the robot to actively discover which parts of its task space it can learn to reach and which part it cannot.",
"title": ""
},
{
"docid": "c01dd2ae90781291cb5915957bd42ae1",
"text": "Mobile devices have become an important part of our everyday life, harvesting more and more confidential user information. Their portable nature and the great exposure to security attacks, however, call out for stronger authentication mechanisms than simple password-based identification. Biometric authentication techniques have shown potential in this context. Unfortunately, prior approaches are either excessively prone to forgery or have too low accuracy to foster widespread adoption. In this paper, we propose sensor-enhanced keystroke dynamics, a new biometric mechanism to authenticate users typing on mobile devices. The key idea is to characterize the typing behavior of the user via unique sensor features and rely on standard machine learning techniques to perform user authentication. To demonstrate the effectiveness of our approach, we implemented an Android prototype system termed Unagi. Our implementation supports several feature extraction and detection algorithms for evaluation and comparison purposes. Experimental results demonstrate that sensor-enhanced keystroke dynamics can improve the accuracy of recent gestured-based authentication mechanisms (i.e., EER>0.5%) by one order of magnitude, and the accuracy of traditional keystroke dynamics (i.e., EER>7%) by two orders of magnitude.",
"title": ""
},
{
"docid": "4a70c88a031195a5593aaa403b9681cd",
"text": "In this paper, we are interested in two seemingly different concepts: adversarial training and generative adversarial networks (GANs). Particularly, how these techniques help to improve each other. To this end, we analyze the limitation of adversarial training as the defense method, starting from questioning how well the robustness of a model can generalize. Then, we successfully improve the generalizability via data augmentation by the “fake” images sampled from generative adversarial network. After that, we are surprised to see that the resulting robust classifier leads to a better generator, for free. We intuitively explain this interesting phenomenon and leave the theoretical analysis for future work. Motivated by these observations, we propose a system that combines generator, discriminator, and adversarial attacker in a single network. After end-to-end training and fine tuning, our method can simultaneously improve the robustness of classifiers, measured by accuracy under strong adversarial attacks, and the quality of generators, evaluated both aesthetically and quantitatively. In terms of the classifier, we achieve better robustness than the state-of-the-art adversarial training algorithm proposed in (Madry etla., 2017), while our generator achieves competitive performance compared with SN-GAN (Miyato and Koyama, 2018). Source code is publicly available online at https://github.com/anonymous.",
"title": ""
},
{
"docid": "813106ce10d23483ef8afa56857277a2",
"text": "Reinforced concrete walls are commonly used as the primary lateral force-resisting system for tall buildings. As the tools for conducting nonlinear response history analysis have improved and with the advent of performancebased seismic design, reinforced concrete walls and core walls are often employed as the only lateral forceresisting system. Proper modelling of the load versus deformation behaviour of reinforced concrete walls and link beams is essential to accurately predict important response quantities. Given this critical need, an overview of modelling approaches appropriate to capture the lateral load responses of both slender and stout reinforced concrete walls, as well as link beams, is presented. Modelling of both fl exural and shear responses is addressed, as well as the potential impact of coupled fl exure–shear behaviour. Model results are compared with experimental results to assess the ability of common modelling approaches to accurately predict both global and local experimental responses. Based on the fi ndings, specifi c recommendations are made for general modelling issues, limiting material strains for combined bending and axial load, and shear backbone relations. Copyright © 2007 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "2dee247b24afc7ddba44b312c0832bc1",
"text": "During crowded events, cellular networks face voice and data traffic volumes that are often orders of magnitude higher than what they face during routine days. Despite the use of portable base stations for temporarily increasing communication capacity and free Wi-Fi access points for offloading Internet traffic from cellular base stations, crowded events still present significant challenges for cellular network operators looking to reduce dropped call events and improve Internet speeds. For an effective cellular network design, management, and optimization, it is crucial to understand how cellular network performance degrades during crowded events, what causes this degradation, and how practical mitigation schemes would perform in real-life crowded events. This paper makes a first step toward this end by characterizing the operational performance of a tier-1 cellular network in the U.S. during two high-profile crowded events in 2012. We illustrate how the changes in population distribution, user behavior, and application workload during crowded events result in significant voice and data performance degradation, including more than two orders of magnitude increase in connection failures. Our findings suggest two mechanisms that can improve performance without resorting to costly infrastructure changes: radio resource allocation tuning and opportunistic connection sharing. Using trace-driven simulations, we show that more aggressive release of radio resources via 1-2 s shorter radio resource control timeouts as compared with routine days helps to achieve better tradeoff between wasted radio resources, energy consumption, and delay during crowded events, and opportunistic connection sharing can reduce connection failures by 95% when employed by a small number of devices in each cell sector.",
"title": ""
},
{
"docid": "49517920ddecf10a384dc3e98e39459b",
"text": "Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes such as identifying a school bus as an ostrich. However, it is still an open question whether humans are prone to similar mistakes. Here, we address this question by leveraging recent techniques that transfer adversarial examples from computer vision models with known parameters and architecture to other models with unknown parameters and architecture, and by matching the initial processing of the human visual system. We find that adversarial examples that strongly transfer across computer vision models influence the classifications made by time-limited human observers.",
"title": ""
},
{
"docid": "cc2579bb621338908cacc7808cb1f851",
"text": "This paper presents a comprehensive analysis and comparison of air-cored axial-flux permanent-magnet machines with different types of coil configurations. Although coil factor is particularly more sensitive to coil-band width and coil pitch in air-cored machines than conventional slotted machines, remarkably no comprehensive analytical equations exist. Here, new formulas are derived to compare the coil factor of two common concentrated-coil stator winding types. Then, respective coil factors for the winding types are used to determine the torque characteristics and, from that, the optimized coil configurations. Three-dimensional finite-element analysis (FEA) models are built to verify the analytical models. Furthermore, overlapping and wave windings are investigated and compared with the concentrated-coil types. Finally, a prototype machine is designed and built for experimental validations. The results show that the concentrated-coil type with constant coil pitch is superior to all other coil types under study.",
"title": ""
},
{
"docid": "3902afc560de6f0b028315977bc55976",
"text": "Traffic light congestion normally occurs in urban areas where the number of vehicles is too many on the road. This problem drives the need for innovation and provide efficient solutions regardless this problem. Smart system that will monitor the congestion level at the traffic light will be a new option to replace the old system which is not practical anymore. Implementing internet of thinking (IoT) technology will provide the full advantage for monitoring and creating a congestion model based on sensor readings. Multiple sensor placements for each lane will give a huge advantage in detecting vehicle and increasing the accuracy in collecting data. To gather data from each sensor, the LoRaWAN technology is utilized where it features low power wide area network, low cost of implementation and the communication is secure bi-directional for the internet of thinking. The radio frequency used between end nodes to gateways range is estimated around 15-kilometer radius. A series of test is carried out to estimate the range of signal and it gives a positive result. The level of congestion for each lane will be displayed on Grafana dashboard and the algorithm can be calculated. This provides huge advantages to the implementation of this project, especially the scope of the project will be focus in urban areas where the level of congestion is bad.",
"title": ""
},
{
"docid": "6831c633bf7359b8d22296b52a9a60b8",
"text": "The paper presents a system, Heart Track, which aims for automated ECG (Electrocardiogram) analysis. Different modules and algorithms which are proposed and used for implementing the system are discussed. The ECG is the recording of the electrical activity of the heart and represents the depolarization and repolarization of the heart muscle cells and the heart chambers. The electrical signals from the heart are measured non-invasively using skin electrodes and appropriate electronic measuring equipment. ECG is measured using 12 leads which are placed at specific positions on the body [2]. The required data is converted into ECG curve which possesses a characteristic pattern. Deflections from this normal ECG pattern can be used as a diagnostic tool in medicine in the detection of cardiac diseases. Diagnosis of large number of cardiac disorders can be predicted from the ECG waves wherein each component of the ECG wave is associated with one or the other disorder. This paper concentrates entirely on detection of Myocardial Infarction, hence only the related components (ST segment) of the ECG wave are analyzed.",
"title": ""
},
{
"docid": "64c06bffe4aeff54fbae9d87370e552c",
"text": "Social networking sites occupy increasing fields of daily life and act as important communication channels today. But recent research also discusses the dark side of these sites, which expresses in form of stress, envy, addiction or even depression. Nevertheless, there must be a reason why people use social networking sites, even though they face related risks. One reason is human curiosity that tempts users to behave like this. The research on hand presents the impact of curiosity on user acceptance of social networking sites, which is theorized and empirically evaluated by using the technology acceptance model and a quantitative study among Facebook users. It further reveals that especially two types of human curiosity, epistemic and interpersonal curiosity, influence perceived usefulness and perceived enjoyment, and with it technology acceptance.",
"title": ""
},
{
"docid": "a64ae2e6e72b9e38c700ddd62b4f6bf3",
"text": "Cerebral gray-matter volume (GMV) decreases in normal aging but the extent of the decrease may be experience-dependent. Bilingualism may be one protective factor and in this article we examine its potential protective effect on GMV in a region that shows strong age-related decreases-the left anterior temporal pole. This region is held to function as a conceptual hub and might be expected to be a target of plastic changes in bilingual speakers because of the requirement for these speakers to store and differentiate lexical concepts in 2 languages to guide speech production and comprehension processes. In a whole brain comparison of bilingual speakers (n = 23) and monolingual speakers (n = 23), regressing out confounding factors, we find more extensive age-related decreases in GMV in the monolingual brain and significantly increased GMV in left temporal pole for bilingual speakers. Consistent with a specific neuroprotective effect of bilingualism, region of interest analyses showed a significant positive correlation between naming performance in the second language and GMV in this region. The effect appears to be bilateral though because there was a nonsignificantly different effect of naming performance on GMV in the right temporal pole. Our data emphasize the vulnerability of the temporal pole to normal aging and the value of bilingualism as both a general and specific protective factor to GMV decreases in healthy aging.",
"title": ""
}
] |
scidocsrr
|
4b78e80f2a680dcde17697d86ec3ba2e
|
Robust discrete optimization and network flows
|
[
{
"docid": "d53726710ce73fbcf903a1537f149419",
"text": "We treat in this paper Linear Programming (LP) problems with uncertain data. The focus is on uncertainty associated with hard constraints: those which must be satisfied, whatever is the actual realization of the data (within a prescribed uncertainty set). We suggest a modeling methodology whereas an uncertain LP is replaced by its Robust Counterpart (RC). We then develop the analytical and computational optimization tools to obtain robust solutions of an uncertain LP problem via solving the corresponding explicitly stated convex RC program. In particular, it is shown that the RC of an LP with ellipsoidal uncertainty set is computationally tractable, since it leads to a conic quadratic program, which can be solved in polynomial time.",
"title": ""
},
{
"docid": "4a3f7e89874c76f62aa97ef6a114d574",
"text": "A robust approach to solving linear optimization problems with uncertain data was proposed in the early 1970s and has recently been extensively studied and extended. Under this approach, we are willing to accept a suboptimal solution for the nominal values of the data in order to ensure that the solution remains feasible and near optimal when the data changes. A concern with such an approach is that it might be too conservative. In this paper, we propose an approach that attempts to make this trade-off more attractive; that is, we investigate ways to decrease what we call the price of robustness. In particular, we flexibly adjust the level of conservatism of the robust solutions in terms of probabilistic bounds of constraint violations. An attractive aspect of our method is that the new robust formulation is also a linear optimization problem. Thus we naturally extend our methods to discrete optimization problems in a tractable way. We report numerical results for a portfolio optimization problem, a knapsack problem, and a problem from the Net Lib library.",
"title": ""
}
] |
[
{
"docid": "a33ed384b8f4a86e8cc82970c7074bad",
"text": "There appear to be no brain imaging studies investigating which brain mechanisms subserve affective, impulsive violence versus planned, predatory violence. It was hypothesized that affectively violent offenders would have lower prefrontal activity, higher subcortical activity, and reduced prefrontal/subcortical ratios relative to controls, while predatory violent offenders would show relatively normal brain functioning. Glucose metabolism was assessed using positron emission tomography in 41 comparisons, 15 predatory murderers, and nine affective murderers in left and right hemisphere prefrontal (medial and lateral) and subcortical (amygdala, midbrain, hippocampus, and thalamus) regions. Affective murderers relative to comparisons had lower left and right prefrontal functioning, higher right hemisphere subcortical functioning, and lower right hemisphere prefrontal/subcortical ratios. In contrast, predatory murderers had prefrontal functioning that was more equivalent to comparisons, while also having excessively high right subcortical activity. Results support the hypothesis that emotional, unplanned impulsive murderers are less able to regulate and control aggressive impulses generated from subcortical structures due to deficient prefrontal regulation. It is hypothesized that excessive subcortical activity predisposes to aggressive behaviour, but that while predatory murderers have sufficiently good prefrontal functioning to regulate these aggressive impulses, the affective murderers lack such prefrontal control over emotion regulation.",
"title": ""
},
{
"docid": "43e39433013ca845703af053e5ef9e11",
"text": "This paper presents the proposed design of high power and high efficiency inverter for wireless power transfer systems operating at 13.56 MHz using multiphase resonant inverter and GaN HEMT devices. The high efficiency and the stable of inverter are the main targets of the design. The module design, the power loss analysis and the drive circuit design have been addressed. In experiment, a 3 kW inverter with the efficiency of 96.1% is achieved that significantly improves the efficiency of 13.56 MHz inverter. In near future, a 10 kW inverter with the efficiency of over 95% can be realizable by following this design concept.",
"title": ""
},
{
"docid": "c26ff98ac6cc027b07fec213a192a446",
"text": "Basic to all motile life is a differential approach/avoid response to perceived features of environment. The stages of response are initial reflexive noticing and orienting to the stimulus, preparation, and execution of response. Preparation involves a coordination of many aspects of the organism: muscle tone, posture, breathing, autonomic functions, motivational/emotional state, attentional orientation, and expectations. The organism organizes itself in relation to the challenge. We propose to call this the \"preparatory set\" (PS). We suggest that the concept of the PS can offer a more nuanced and flexible perspective on the stress response than do current theories. We also hypothesize that the mechanisms of body-mind therapeutic and educational systems (BTES) can be understood through the PS framework. We suggest that the BTES, including meditative movement, meditation, somatic education, and the body-oriented psychotherapies, are approaches that use interventions on the PS to remedy stress and trauma. We discuss how the PS can be adaptive or maladaptive, how BTES interventions may restore adaptive PS, and how these concepts offer a broader and more flexible view of the phenomena of stress and trauma. We offer supportive evidence for our hypotheses, and suggest directions for future research. We believe that the PS framework will point to ways of improving the management of stress and trauma, and that it will suggest directions of research into the mechanisms of action of BTES.",
"title": ""
},
{
"docid": "1d348cc6b7a98dc5b61c66af9e94153c",
"text": "With the rapid development of web technology, an increasing number of enterprises having been seeking for a method to facilitate business decision making process, power the bottom line, and achieve a fully coordinated organization, called business intelligence (BI). Unfortunately, traditional BI tends to be unmanageable, risky and prohibitively expensive, especially for Small and Medium Enterprises (SMEs). The emergence of cloud computing and Software as a Service (SaaS) provides a cost effective solution. Recently, business intelligence applications delivered via SaaS, termed as Business Intelligence as a Service (SaaS BI), has proved to be the next generation in BI market. However, since SaaS BI just comes in its infant stage, a general framework maybe poorly considered. Therefore, in this paper we proposed a general conceptual framework for SaaS BI, and presented several possible future directions of SaaS BI.",
"title": ""
},
{
"docid": "89b92204d76120bef660eb55303752d2",
"text": "In this paper, we present a design of RTE template structure for AUTOSAR-based vehicle applications. Due to an increase in software complexity in recent years, much greater efforts are necessary to manage and develop software modules in automotive industries. To deal with this issue, an automotive Open system architecture (AUTOSAR) partnership was launched. This embedded platform standardizes software architectures and provides a methodology supporting distributed process. The RTE that is located at the heart of AUTOSAR implements the virtual function bus functionality for a particle electronic control unit (ECU). It enables to communicate between application software components. The purpose of this paper is to design a RTE structure with the AUTOSAR standard concept. As a future work, this research will be further extended, and we will propose the development of a RTE generator that is drawn on an AUTOSAR RTE template structure.",
"title": ""
},
{
"docid": "51f2ba8b460be1c9902fb265b2632232",
"text": "Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice.",
"title": ""
},
{
"docid": "0f56b99bc1d2c9452786c05242c89150",
"text": "Individuals with below-knee amputation have more difficulty balancing during walking, yet few studies have explored balance enhancement through active prosthesis control. We previously used a dynamical model to show that prosthetic ankle push-off work affects both sagittal and frontal plane dynamics, and that appropriate step-by-step control of push-off work can improve stability. We hypothesized that this approach could be applied to a robotic prosthesis to partially fulfill the active balance requirements of human walking, thereby reducing balance-related activity and associated effort for the person using the device. We conducted experiments on human participants (N = 10) with simulated amputation. Prosthetic ankle push-off work was varied on each step in ways expected to either stabilize, destabilize or have no effect on balance. Average ankle push-off work, known to affect effort, was kept constant across conditions. Stabilizing controllers commanded more push-off work on steps when the mediolateral velocity of the center of mass was lower than usual at the moment of contralateral heel strike. Destabilizing controllers enforced the opposite relationship, while a neutral controller maintained constant push-off work regardless of body state. A random disturbance to landing foot angle and a cognitive distraction task were applied, further challenging participants’ balance. We measured metabolic rate, foot placement kinematics, center of pressure kinematics, distraction task performance, and user preference in each condition. We expected the stabilizing controller to reduce active control of balance and balance-related effort for the user, improving user preference. The best stabilizing controller lowered metabolic rate by 5.5% (p = 0.003) and 8.5% (p = 0.02), and step width variability by 10.0% (p = 0.009) and 10.7% (p = 0.03) compared to conditions with no control and destabilizing control, respectively. Participants tended to prefer stabilizing controllers. These effects were not due to differences in average push-off work, which was unchanged across conditions, or to average gait mechanics, which were also unchanged. Instead, benefits were derived from step-by-step adjustments to prosthesis behavior in response to variations in mediolateral velocity at heel strike. Once-per-step control of prosthetic ankle push-off work can reduce both active control of foot placement and balance-related metabolic energy use during walking.",
"title": ""
},
{
"docid": "3e24de04f0b1892b27fc60bb8a405d0d",
"text": "A power factor (PF) corrected single stage, two-switch isolated zeta converter is proposed for arc welding. This modified zeta converter is having two switches and two clamping diodes on the primary side of a high-frequency transformer. This, in turn, results in reduced switch stress. The proposed converter is designed to operate in a discontinuous inductor current mode (DICM) to achieve inherent PF correction at the utility. The DICM operation substantially reduces the complexity of the control and effectively regulates the output dc voltage. The proposed converter offers several features, such as inherent overload current limit and fast parametrical response, to the load and source voltage conditions. This, in turn, results in an improved performance in terms of power quality indices and an enhanced weld bead quality. The proposed modified zeta converter is designed and its performance is simulated in the MATLAB/Simulink environment. Simulated results are also verified experimentally on a developed prototype of the converter. The performance of the system is investigated in terms of its input PF, displacement PF, total harmonic distortion of ac mains current, voltage regulation, and robustness to prove its efficacy in overall performance.",
"title": ""
},
{
"docid": "3eb419ef59ad59e60bf357cfb2e69fba",
"text": "Heterogeneous information network (HIN) has been widely adopted in recommender systems due to its excellence in modeling complex context information. Although existing HIN based recommendation methods have achieved performance improvement to some extent, they have two major shortcomings. First, these models seldom learn an explicit representation for path or meta-path in the recommendation task. Second, they do not consider the mutual effect between the meta-path and the involved user-item pair in an interaction. To address these issues, we develop a novel deep neural network with the co-attention mechanism for leveraging rich meta-path based context for top-N recommendation. We elaborately design a three-way neural interaction model by explicitly incorporating meta-path based context. To construct the meta-path based context, we propose to use a priority based sampling technique to select high-quality path instances. Our model is able to learn effective representations for users, items and meta-path based context for implementing a powerful interaction function. The co-attention mechanism improves the representations for meta-path based con- text, users and items in a mutual enhancement way. Extensive experiments on three real-world datasets have demonstrated the effectiveness of the proposed model. In particular, the proposed model performs well in the cold-start scenario and has potentially good interpretability for the recommendation results.",
"title": ""
},
{
"docid": "a219afda822413bbed34a21145807b47",
"text": "In this work, the author implemented a NOVEL technique of multiple input multiple output (MIMO) orthogonal frequency division multiplexing (OFDM) based on space frequency block coding (SF-BC). Where, the implemented code is designed based on the QOC using the techniques of the reconfigurable antennas. The proposed system is implemented using MATLAB program, and the results showing best performance of a wireless communications system of higher coding gain and diversity.",
"title": ""
},
{
"docid": "c67fbc6e0a2a66e0855dcfc7a70cfb86",
"text": "We present an optimistic primary-backup (so-called passive replication) mechanism for highly available Internet services on intercloud platforms. Our proposed method aims at providing Internet services despite the occurrence of a large-scale disaster. To this end, each service in our method creates replicas in different data centers and coordinates them with an optimistic consensus algorithm instead of a majority-based consensus algorithm such as Paxos. Although our method allows temporary inconsistencies among replicas, it eventually converges on the desired state without an interruption in services. In particular, the method tolerates simultaneous failure of the majority of nodes and a partitioning of the network. Moreover, through interservice communications, members of the service groups are autonomously reorganized according to the type of failure and changes in system load. This enables both load balancing and power savings, as well as provisioning for the next disaster. We demonstrate the service availability provided by our approach for simulated failure patterns and its adaptation to changes in workload for load balancing and power savings by experiments with a prototype implementation.",
"title": ""
},
{
"docid": "5eeb17964742e1bf1e517afcb1963b02",
"text": "Global navigation satellite system reflectometry is a multistatic radar using navigation signals as signals of opportunity. It provides wide-swath and improved spatiotemporal sampling over current space-borne missions. The lack of experimental datasets from space covering signals from multiple constellations (GPS, GLONASS, Galileo, and Beidou) at dual-band (L1 and L2) and dual-polarization (right- and left-hand circular polarization), over the ocean, land, and cryosphere remains a bottleneck to further develop these techniques. 3Cat-2 is a 6-unit (3 × 2 elementary blocks of 10 × 10 × 10 cm3) CubeSat mission designed and implemented at the Universitat Politècnica de Catalunya-BarcelonaTech to explore fundamental issues toward an improvement in the understanding of the bistatic scattering properties of different targets. Since geolocalization of the specific reflection points is determined by the geometry only, a moderate pointing accuracy is only required to correct the antenna pattern in scatterometry measurements. This paper describes the mission analysis and the current status of the assembly, integration, and verification activities of both the engineering model and the flight model performed at Universitat Politècnica de Catalunya NanoSatLab premises. 3Cat-2 launch is foreseen for the second quarter of 2016 into a Sun-Synchronous orbit of 510-km height.",
"title": ""
},
{
"docid": "3980da6e0c81bf029bbada09d7ea59e3",
"text": "We study RF-enabled wireless energy transfer (WET) via energy beamforming, from a multi-antenna energy transmitter (ET) to multiple energy receivers (ERs) in a backscatter communication system such as RFID. The acquisition of the forward-channel (i.e., ET-to-ER) state information (F-CSI) at the ET (or RFID reader) is challenging, since the ERs (or RFID tags) are typically too energy-and-hardware-constrained to estimate or feedback the F-CSI. The ET leverages its observed backscatter signals to estimate the backscatter-channel (i.e., ET-to-ER-to-ET) state information (BS-CSI) directly. We first analyze the harvested energy obtained using the estimated BS-CSI. Furthermore, we optimize the resource allocation to maximize the total utility of harvested energy. For WET to single ER, we obtain the optimal channel-training energy in a semiclosed form. For WET to multiple ERs, we optimize the channel-training energy and the energy allocation weights for different energy beams. For the straightforward weighted-sum-energy (WSE) maximization, the optimal WET scheme is shown to use only one energy beam, which leads to unfairness among ERs and motivates us to consider the complicated proportional-fair-energy (PFE) maximization. For PFE maximization, we show that it is a biconvex problem, and propose a block-coordinate-descent-based algorithm to find the close-to-optimal solution. Numerical results show that with the optimized solutions, the harvested energy suffers slight reduction of less than 10%, compared to that obtained using the perfect F-CSI.",
"title": ""
},
{
"docid": "122bc83bcd27b95092c64cf1ad8ee6a8",
"text": "Plants make the world, a greener and a better place to live in. Although all plants need water to survive, giving them too much or too little can cause them to die. Thus, we need to implement an automatic plant watering system that ensures that the plants are watered at regular intervals, with appropriate amount, whenever they are in need. This paper describes the object oriented design of an IoT based Automated Plant Watering System.",
"title": ""
},
{
"docid": "6f8a2292749aaae6add667d153e3abbd",
"text": "Despite a flurry of activities aimed at serving customers better, few companies have systematically revamped their operations with customer loyalty in mind. Instead, most have adopted improvement programs ad hoc, and paybacks haven't materialized. Building a highly loyal customer base must be integral to a company's basic business strategy. Loyalty leaders like MBNA credit cards are successful because they have designed their entire business systems around customer loyalty--a self-reinforcing system in which the company delivers superior value consistently and reinvents cash flows to find and keep high-quality customers and employees. The economic benefits of high customer loyalty are measurable. When a company consistently delivers superior value and wins customer loyalty, market share and revenues go up, and the cost of acquiring new customers goes down. The better economics mean the company can pay workers better, which sets off a whole chain of events. Increased pay boosts employee moral and commitment; as employees stay longer, their productivity goes up and training costs fall; employees' overall job satisfaction, combined with their experience, helps them serve customers better; and customers are then more inclined to stay loyal to the company. Finally, as the best customers and employees become part of the loyalty-based system, competitors are left to survive with less desirable customers and less talented employees. To compete on loyalty, a company must understand the relationships between customer retention and the other parts of the business--and be able to quantify the linkages between loyalty and profits. It involves rethinking and aligning four important aspects of the business: customers, product/service offering, employees, and measurement systems.",
"title": ""
},
{
"docid": "95db9ce9faaf13e8ff8d5888a6737683",
"text": "Measurements of pH, acidity, and alkalinity are commonly used to describe water quality. The three variables are interrelated and can sometimes be confused. The pH of water is an intensity factor, while the acidity and alkalinity of water are capacity factors. More precisely, acidity and alkalinity are defined as a water’s capacity to neutralize strong bases or acids, respectively. The term “acidic” for pH values below 7 does not imply that the water has no alkalinity; likewise, the term “alkaline” for pH values above 7 does not imply that the water has no acidity. Water with a pH value between 4.5 and 8.3 has both total acidity and total alkalinity. The definition of pH, which is based on logarithmic transformation of the hydrogen ion concentration ([H+]), has caused considerable disagreement regarding the appropriate method of describing average pH. The opinion that pH values must be transformed to [H+] values before averaging appears to be based on the concept of mixing solutions of different pH. In practice, however, the averaging of [H+] values will not provide the correct average pH because buffers present in natural waters have a greater effect on final pH than does dilution alone. For nearly all uses of pH in fisheries and aquaculture, pH values may be averaged directly. When pH data sets are transformed to [H+] to estimate average pH, extreme pH values will distort the average pH. Values of pH conform more closely to a normal distribution than do values of [H+], making the pH values more acceptable for use in statistical analysis. Moreover, electrochemical measurements of pH and many biological responses to [H+] are described by the Nernst equation, which states that the measured or observed response is linearly related to 10-fold changes in [H+]. Based on these considerations, pH rather than [H+] is usually the most appropriate variable for use in statistical analysis. *Corresponding author: [email protected] Received November 2, 2010; accepted February 7, 2011 Published online September 27, 2011 Temperature, salinity, hardness, pH, acidity, and alkalinity are fundamental variables that define the quality of water. Although all six variables have precise, unambiguous definitions, the last three variables are often misinterpreted in aquaculture and fisheries studies. In this paper, we explain the concepts of pH, acidity, and alkalinity, and we discuss practical relationships among those variables. We also discuss the concept of pH averaging as an expression of the central tendency of pH measurements. The concept of pH averaging is poorly understood, if not controversial, because many believe that pH values, which are log-transformed numbers, cannot be averaged directly. We argue that direct averaging of pH values is the simplest and most logical approach for most uses and that direct averaging is based on sound practical and statistical principles. THE pH CONCEPT The pH is an index of the hydrogen ion concentration ([H+]) in water. The [H+] affects most chemical and biological processes; thus, pH is an important variable in water quality endeavors. Water temperature probably is the only water quality variable that is measured more commonly than pH. The pH concept has its basis in the ionization of water:",
"title": ""
},
{
"docid": "70fafdedd05a40db5af1eabdf07d431c",
"text": "Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground true data. Convolutional networks are employed to automatically detect the LV chamber in MRI dataset. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the art methods. Excellent agreement with the ground truth was achieved. Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus those of 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78, obtained by other methods, respectively.",
"title": ""
},
{
"docid": "f141bd66dc2a842c21f905e3e01fa93c",
"text": "In this paper, we develop the nonsubsampled contourlet transform (NSCT) and study its applications. The construction proposed in this paper is based on a nonsubsampled pyramid structure and nonsubsampled directional filter banks. The result is a flexible multiscale, multidirection, and shift-invariant image decomposition that can be efficiently implemented via the a trous algorithm. At the core of the proposed scheme is the nonseparable two-channel nonsubsampled filter bank (NSFB). We exploit the less stringent design condition of the NSFB to design filters that lead to a NSCT with better frequency selectivity and regularity when compared to the contourlet transform. We propose a design framework based on the mapping approach, that allows for a fast implementation based on a lifting or ladder structure, and only uses one-dimensional filtering in some cases. In addition, our design ensures that the corresponding frame elements are regular, symmetric, and the frame is close to a tight one. We assess the performance of the NSCT in image denoising and enhancement applications. In both applications the NSCT compares favorably to other existing methods in the literature",
"title": ""
},
{
"docid": "0774820345f37dd1ae474fc4da1a3a86",
"text": "Several diseases and disorders are treatable with therapeutic proteins, but some of these products may induce an immune response, especially when administered as multiple doses over prolonged periods. Antibodies are created by classical immune reactions or by the breakdown of immune tolerance; the latter is characteristic of human homologue products. Many factors influence the immunogenicity of proteins, including structural features (sequence variation and glycosylation), storage conditions (denaturation, or aggregation caused by oxidation), contaminants or impurities in the preparation, dose and length of treatment, as well as the route of administration, appropriate formulation and the genetic characteristics of patients. The clinical manifestations of antibodies directed against a given protein may include loss of efficacy, neutralization of the natural counterpart and general immune system effects (including allergy, anaphylaxis or serum sickness). An upsurge in the incidence of antibody-mediated pure red cell aplasia (PRCA) among patients taking one particular formulation of recombinant human erythropoietin (epoetin-alpha, marketed as Eprex(R)/Erypo(R); Johnson & Johnson) in Europe caused widespread concern. The PRCA upsurge coincided with removal of human serum albumin from epoetin-alpha in 1998 and its replacement with glycine and polysorbate 80. Although the immunogenic potential of this particular product may have been enhanced by the way the product was stored, handled and administered, it should be noted that the subcutaneous route of administration does not confer immunogenicity per se. The possible role of micelle (polysorbate 80 plus epoetin-alpha) formation in the PRCA upsurge with Eprex is currently being investigated.",
"title": ""
},
{
"docid": "1c9c30e3e007c2d11c6f5ebd0092050b",
"text": "Fatty acids are essential components of the dynamic lipid metabolism in cells. Fatty acids can also signal to intracellular pathways to trigger a broad range of cellular responses. Oleic acid is an abundant monounsaturated omega-9 fatty acid that impinges on different biological processes, but the mechanisms of action are not completely understood. Here, we report that oleic acid stimulates the cAMP/protein kinase A pathway and activates the SIRT1-PGC1α transcriptional complex to modulate rates of fatty acid oxidation. In skeletal muscle cells, oleic acid treatment increased intracellular levels of cyclic adenosine monophosphate (cAMP) that turned on protein kinase A activity. This resulted in SIRT1 phosphorylation at Ser-434 and elevation of its catalytic deacetylase activity. A direct SIRT1 substrate is the transcriptional coactivator peroxisome proliferator-activated receptor γ coactivator 1-α (PGC1α), which became deacetylated and hyperactive after oleic acid treatment. Importantly, oleic acid, but not other long chain fatty acids such as palmitate, increased the expression of genes linked to fatty acid oxidation pathway in a SIRT1-PGC1α-dependent mechanism. As a result, oleic acid potently accelerated rates of complete fatty acid oxidation in skeletal muscle cells. These results illustrate how a single long chain fatty acid specifically controls lipid oxidation through a signaling/transcriptional pathway. Pharmacological manipulation of this lipid signaling pathway might provide therapeutic possibilities to treat metabolic diseases associated with lipid dysregulation.",
"title": ""
}
] |
scidocsrr
|
0362bc28f18510c434c802902068035d
|
An ontology approach to object-based image retrieval
|
[
{
"docid": "5e0cff7f2b8e5aa8d112eacf2f149d60",
"text": "THEORIES IN AI FALL INT O TWO broad categories: mechanismtheories and contenttheories. Ontologies are content the ories about the sor ts of objects, properties of objects,and relations between objects tha t re possible in a specif ed domain of kno wledge. They provide potential ter ms for descr ibing our knowledge about the domain. In this article, we survey the recent de velopment of the f ield of ontologies in AI. We point to the some what different roles ontologies play in information systems, naturallanguage under standing, and knowledgebased systems. Most r esear ch on ontologies focuses on what one might characterize as domain factual knowledge, because kno wlede of that type is par ticularly useful in natural-language under standing. There is another class of ontologies that are important in KBS—one that helps in shar ing knoweldge about reasoning str ategies or pr oblemsolving methods. In a f ollow-up article, we will f ocus on method ontolo gies.",
"title": ""
},
{
"docid": "0084d9c69d79a971e7139ab9720dd846",
"text": "ÐRetrieving images from large and varied collections using image content as a key is a challenging and important problem. We present a new image representation that provides a transformation from the raw pixel data to a small set of image regions that are coherent in color and texture. This aBlobworldo representation is created by clustering pixels in a joint color-texture-position feature space. The segmentation algorithm is fully automatic and has been run on a collection of 10,000 natural images. We describe a system that uses the Blobworld representation to retrieve images from this collection. An important aspect of the system is that the user is allowed to view the internal representation of the submitted image and the query results. Similar systems do not offer the user this view into the workings of the system; consequently, query results from these systems can be inexplicable, despite the availability of knobs for adjusting the similarity metrics. By finding image regions that roughly correspond to objects, we allow querying at the level of objects rather than global image properties. We present results indicating that querying for images using Blobworld produces higher precision than does querying using color and texture histograms of the entire image in cases where the image contains distinctive objects. Index TermsÐSegmentation and grouping, image retrieval, image querying, clustering, Expectation-Maximization.",
"title": ""
}
] |
[
{
"docid": "e751fdbc980c36b95c81f0f865bb5033",
"text": "In order to match shoppers with desired products and provide personalized promotions, whether in online or offline shopping worlds, it is critical to model both consumer preferences and price sensitivities simultaneously. Personalized preferences have been thoroughly studied in the field of recommender systems, though price (and price sensitivity) has received relatively little attention. At the same time, price sensitivity has been richly explored in the area of economics, though typically not in the context of developing scalable, working systems to generate recommendations. In this study, we seek to bridge the gap between large-scale recommender systems and established consumer theories from economics, and propose a nested feature-based matrix factorization framework to model both preferences and price sensitivities. Quantitative and qualitative results indicate the proposed personalized, interpretable and scalable framework is capable of providing satisfying recommendations (on two datasets of grocery transactions) and can be applied to obtain economic insights into consumer behavior.",
"title": ""
},
{
"docid": "aea0aeea95d251b5a7102825ad5c66ce",
"text": "The life time extension in the wireless sensor network (WSN) is the major concern in real time application, if the battery attached with the sensor node life is not optimized properly then the network life fall short. A protocol using a new evolutionary technique, cat swarm optimization (CSO), is designed and implemented in real time to minimize the intra-cluster distances between the cluster members and their cluster heads and optimize the energy distribution for the WSNs. We analyzed the performance of WSN protocol with the help of sensor nodes deployed in a field and grouped in to clusters. The novelty in our proposed scheme is considering the received signal strength, residual battery voltage and intra cluster distance of sensor nodes in cluster head selection with the help of CSO. The result is compared with the well-known protocol Low-energy adaptive clustering hierarchy-centralized (LEACH-C) and the swarm based optimization technique Particle swarm optimization (PSO). It was found that the battery energy level has been increased considerably of the traditional LEACH and PSO algorithm.",
"title": ""
},
{
"docid": "21e47bd70185299e94f8553ca7e60a6e",
"text": "Processes causing greenhouse gas (GHG) emissions benefit humans by providing consumer goods and services. This benefit, and hence the responsibility for emissions, varies by purpose or consumption category and is unevenly distributed across and within countries. We quantify greenhouse gas emissions associated with the final consumption of goods and services for 73 nations and 14 aggregate world regions. We analyze the contribution of 8 categories: construction, shelter, food, clothing, mobility, manufactured products, services, and trade. National average per capita footprints vary from 1 tCO2e/y in African countries to approximately 30/y in Luxembourg and the United States. The expenditure elasticity is 0.57. The cross-national expenditure elasticity for just CO2, 0.81, corresponds remarkably well to the cross-sectional elasticities found within nations, suggesting a global relationship between expenditure and emissions that holds across several orders of magnitude difference. On the global level, 72% of greenhouse gas emissions are related to household consumption, 10% to government consumption, and 18% to investments. Food accounts for 20% of GHG emissions, operation and maintenance of residences is 19%, and mobility is 17%. Food and services are more important in developing countries, while mobility and manufactured goods rise fast with income and dominate in rich countries. The importance of public services and manufactured goods has not yet been sufficiently appreciated in policy. Policy priorities hence depend on development status and country-level characteristics.",
"title": ""
},
{
"docid": "6fb1f05713db4e771d9c610fa9c9925d",
"text": "Objectives: Straddle injury represents a rare and complex injury to the female genito urinary tract (GUT). Overall prevention would be the ultimate goal, but due to persistent inhomogenity and inconsistency in definitions and guidelines, or suboptimal coding, the optimal study design for a prevention programme is still missing. Thus, medical records data were tested for their potential use for an injury surveillance registry and their impact on future prevention programmes. Design: Retrospective record analysis out of a 3 year period. Setting: All patients were treated exclusively by the first author. Patients: Six girls, median age 7 years, range 3.5 to 12 years with classical straddle injury. Interventions: Medical treatment and recording according to National and International Standards. Main Outcome Measures: All records were analyzed for accuracy in diagnosis and coding, surgical procedure, time and location of incident and examination findings. Results: All registration data sets were complete. A specific code for “straddle injury” in International Classification of Diseases (ICD) did not exist. Coding followed mainly reimbursement issues and specific information about the injury was usually expressed in an individual style. Conclusions: As demonstrated in this pilot, population based medical record data collection can play a substantial part in local injury surveillance registry and prevention initiatives planning.",
"title": ""
},
{
"docid": "5a85c72c5b9898b010f047ee99dba133",
"text": "A method to design arbitrary three-way power dividers with ultra-wideband performance is presented. The proposed devices utilize a broadside-coupled structure, which has three coupled layers. The method assumes general asymmetric coupled layers. The design approach exploits the three fundamental modes of propagation: even-even, odd-odd, and odd-even, and the conformal mapping technique to find the coupling factors between the different layers. The method is used to design 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 three-way power dividers. The designed devices feature a multilayer broadside-coupled microstrip-slot-microstrip configuration using elliptical-shaped structures. The developed power dividers have a compact size with an overall dimension of 20 mm 30 mm. The simulated and measured results of the manufactured devices show an insertion loss equal to the nominated value 1 dB. The return loss for the input/output ports of the devices is better than 17, 18, and 13 dB, whereas the isolation between the output ports is better than 17, 14, and 15 dB for the 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 dividers, respectively, across the 3.1-10.6-GHz band.",
"title": ""
},
{
"docid": "917154ffa5d9108fd07782d1c9a183ba",
"text": "Recommender systems for automatically suggested items of interest to users have become increasingly essential in fields where mass personalization is highly valued. The popular core techniques of such systems are collaborative filtering, content-based filtering and combinations of these. In this paper, we discuss hybrid approaches, using collaborative and also content data to address cold-start - that is, giving recommendations to novel users who have no preference on any items, or recommending items that no user of the community has seen yet. While there have been lots of studies on solving the item-side problems, solution for user-side problems has not been seen public. So we develop a hybrid model based on the analysis of two probabilistic aspect models using pure collaborative filtering to combine with users' information. The experiments with MovieLen data indicate substantial and consistent improvements of this model in overcoming the cold-start user-side problem.",
"title": ""
},
{
"docid": "70fd930a2a6504404bec67779cba71b2",
"text": "This article discusses the logical implementation of the media access control and the physical layer of 100 Gb/s Ethernet. The target are a MAC/PCS LSI, supporting MAC and physical coding sublayer, and a gearbox LSI, providing 10:4 parallel lane-width exchange inside an optical module. The two LSIs are connected by a 100 gigabit attachment unit interface, which consists of ten 10 Gb/s lines. We realized a MAC/PCS logical circuit with a low-frequency clock on a FPGA, whose size is 250 kilo LUTs with a 5.7 Mbit RAM, and the power consumption of the gearbox LSI estimated to become 2.3 W.",
"title": ""
},
{
"docid": "6ea91574db57616682cf2a9608b0ac0b",
"text": "METHODOLOGY AND PRINCIPAL FINDINGS\nOleuropein promoted cultured human follicle dermal papilla cell proliferation and induced LEF1 and Cyc-D1 mRNA expression and β-catenin protein expression in dermal papilla cells. Nuclear accumulation of β-catenin in dermal papilla cells was observed after oleuropein treatment. Topical application of oleuropein (0.4 mg/mouse/day) to C57BL/6N mice accelerated the hair-growth induction and increased the size of hair follicles in telogenic mouse skin. The oleuropein-treated mouse skin showed substantial upregulation of Wnt10b, FZDR1, LRP5, LEF1, Cyc-D1, IGF-1, KGF, HGF, and VEGF mRNA expression and β-catenin protein expression.\n\n\nCONCLUSIONS AND SIGNIFICANCE\nThese results demonstrate that topical oleuroepin administration induced anagenic hair growth in telogenic C57BL/6N mouse skin. The hair-growth promoting effect of oleuropein in mice appeared to be associated with the stimulation of the Wnt10b/β-catenin signaling pathway and the upregulation of IGF-1, KGF, HGF, and VEGF gene expression in mouse skin tissue.",
"title": ""
},
{
"docid": "94cf1976c10d632cfce12ce3f32be4cc",
"text": "In today’s economic turmoil, the pay-per-use pricing model of cloud computing, its flexibility and scalability and the potential for better security and availability levels are alluring to both SMEs and large enterprises. However, cloud computing is fraught with security risks which need to be carefully evaluated before any engagement in this area. This article elaborates on the most important risks inherent to the cloud such as information security, regulatory compliance, data location, investigative support, provider lock-in and disaster recovery. We focus on risk and control analysis in relation to a sample of Swiss companies with regard to their prospective adoption of public cloud services. We observe a sufficient degree of risk awareness with a focus on those risks that are relevant to the IT function to be migrated to the cloud. Moreover, the recommendations as to the adoption of cloud services depend on the company’s size with larger and more technologically advanced companies being better prepared for the cloud. As an exploratory first step, the results of this study would allow us to design and implement broader research into cloud computing risk management in Switzerland.",
"title": ""
},
{
"docid": "db1abd38db0295fc573bdfca2c2b19a3",
"text": "BACKGROUND\nBacterial vaginosis (BV) has been most consistently linked to sexual behaviour, and the epidemiological profile of BV mirrors that of established sexually transmitted infections (STIs). It remains a matter of debate however whether BV pathogenesis does actually involve sexual transmission of pathogenic micro-organisms from men to women. We therefore made a critical appraisal of the literature on BV in relation to sexual behaviour.\n\n\nDISCUSSION\nG. vaginalis carriage and BV occurs rarely with children, but has been observed among adolescent, even sexually non-experienced girls, contradicting that sexual transmission is a necessary prerequisite to disease acquisition. G. vaginalis carriage is enhanced by penetrative sexual contact but also by non-penetrative digito-genital contact and oral sex, again indicating that sex per se, but not necessarily coital transmission is involved. Several observations also point at female-to-male rather than at male-to-female transmission of G. vaginalis, presumably explaining the high concordance rates of G. vaginalis carriage among couples. Male antibiotic treatment has not been found to protect against BV, condom use is slightly protective, whereas male circumcision might protect against BV. BV is also common among women-who-have-sex-with-women and this relates at least in part to non-coital sexual behaviours. Though male-to-female transmission cannot be ruled out, overall there is little evidence that BV acts as an STD. Rather, we suggest BV may be considered a sexually enhanced disease (SED), with frequency of intercourse being a critical factor. This may relate to two distinct pathogenetic mechanisms: (1) in case of unprotected intercourse alkalinisation of the vaginal niche enhances a shift from lactobacilli-dominated microflora to a BV-like type of microflora and (2) in case of unprotected and protected intercourse mechanical transfer of perineal enteric bacteria is enhanced by coitus. A similar mechanism of mechanical transfer may explain the consistent link between non-coital sexual acts and BV. Similar observations supporting the SED pathogenetic model have been made for vaginal candidiasis and for urinary tract infection.\n\n\nSUMMARY\nThough male-to-female transmission cannot be ruled out, overall there is incomplete evidence that BV acts as an STI. We believe however that BV may be considered a sexually enhanced disease, with frequency of intercourse being a critical factor.",
"title": ""
},
{
"docid": "5528f1ee010e7fba440f1f7ff84a3e8e",
"text": "In presenting this thesis in partial fulfillment of the requirements for a Master's degree at the University of Washington, I agree that the Library shall make its copies freely available for inspection. I further agree that extensive copying of this thesis is allowable only for scholarly purposes, consistent with \"fair use\" as prescribed in the U.S. Copyright Law. Any other reproduction for any purposes or by any means shall not be allowed without my written permission. PREFACE Over the last several years, professionals from many different fields have come to the Human Interface Technology Laboratory (H.I.T.L) to discover and learn about virtual environments. In general, they are impressed by their experiences and express the tremendous potential the tool has in their respective fields. But the potentials are always projected far in the future, and the tool remains just a concept. This is justifiable because the quality of the visual experience is so much less than what people are used to seeing; high definition television, breathtaking special cinematographic effects and photorealistic computer renderings. Instead, the models in virtual environments are very simple looking; they are made of small spaces, filled with simple or abstract looking objects of little color distinctions as seen through displays of noticeably low resolution and at an update rate which leaves much to be desired. Clearly, for most applications, the requirements of precision have not been met yet with virtual interfaces as they exist today. However, there are a few domains where the relatively low level of the technology could be perfectly appropriate. In general, these are applications which require that the information be presented in symbolic or representational form. Having studied architecture, I knew that there are moments during the early part of the design process when conceptual decisions are made which require precisely the simple and representative nature available in existing virtual environments. This was a marvelous discovery for me because I had found a viable use for virtual environments which could be immediately beneficial to architecture, my shared area of interest. It would be further beneficial to architecture in that the virtual interface equipment I would be evaluating at the H.I.T.L. happens to be relatively less expensive and more practical than other configurations such as the \"Walkthrough\" at the University of North Carolina. The setup at the H.I.T.L. could be easily introduced into architectural firms because it takes up very little physical room (150 …",
"title": ""
},
{
"docid": "d10afc83c234c1c0531e23b29b5d8895",
"text": "BACKGROUND\nThe efficacy of new antihypertensive drugs has been questioned. We compared the effects of conventional and newer antihypertensive drugs on cardiovascular mortality and morbidity in elderly patients.\n\n\nMETHODS\nWe did a prospective, randomised trial in 6614 patients aged 70-84 years with hypertension (blood pressure > or = 180 mm Hg systolic, > or = 105 mm Hg diastolic, or both). Patients were randomly assigned conventional antihypertensive drugs (atenolol 50 mg, metoprolol 100 mg, pindolol 5 mg, or hydrochlorothiazide 25 mg plus amiloride 2.5 mg daily) or newer drugs (enalapril 10 mg or lisinopril 10 mg, or felodipine 2.5 mg or isradipine 2-5 mg daily). We assessed fatal stroke, fatal myocardial infarction, and other fatal cardiovascular disease. Analysis was by intention to treat.\n\n\nFINDINGS\nBlood pressure was decreased similarly in all treatment groups. The primary combined endpoint of fatal stroke, fatal myocardial infarction, and other fatal cardiovascular disease occurred in 221 of 2213 patients in the conventional drugs group (19.8 events per 1000 patient-years) and in 438 of 4401 in the newer drugs group (19.8 per 1000; relative risk 0.99 [95% CI 0.84-1.16], p=0.89). The combined endpoint of fatal and non-fatal stroke, fatal and non-fatal myocardial infarction, and other cardiovascular mortality occurred in 460 patients taking conventional drugs and in 887 taking newer drugs (0.96 [0.86-1.08], p=0.49).\n\n\nINTERPRETATION\nOld and new antihypertensive drugs were similar in prevention of cardiovascular mortality or major events. Decrease in blood pressure was of major importance for the prevention of cardiovascular events.",
"title": ""
},
{
"docid": "ee1e2400ed5c944826747a8e616b18c1",
"text": "Metastasis remains the greatest challenge in the clinical management of cancer. Cell motility is a fundamental and ancient cellular behaviour that contributes to metastasis and is conserved in simple organisms. In this Review, we evaluate insights relevant to human cancer that are derived from the study of cell motility in non-mammalian model organisms. Dictyostelium discoideum, Caenorhabditis elegans, Drosophila melanogaster and Danio rerio permit direct observation of cells moving in complex native environments and lend themselves to large-scale genetic and pharmacological screening. We highlight insights derived from each of these organisms, including the detailed signalling network that governs chemotaxis towards chemokines; a novel mechanism of basement membrane invasion; the positive role of E-cadherin in collective direction-sensing; the identification and optimization of kinase inhibitors for metastatic thyroid cancer on the basis of work in flies; and the value of zebrafish for live imaging, especially of vascular remodelling and interactions between tumour cells and host tissues. While the motility of tumour cells and certain host cells promotes metastatic spread, the motility of tumour-reactive T cells likely increases their antitumour effects. Therefore, it is important to elucidate the mechanisms underlying all types of cell motility, with the ultimate goal of identifying combination therapies that will increase the motility of beneficial cells and block the spread of harmful cells.",
"title": ""
},
{
"docid": "ec673efa5f837ba4c997ee7ccd845ce1",
"text": "Deep Neural Networks (DNNs) are hierarchical nonlinear architectures that have been widely used in artificial intelligence applications. However, these models are vulnerable to adversarial perturbations which add changes slightly and are crafted explicitly to fool the model. Such attacks will cause the neural network to completely change its classification of data. Although various defense strategies have been proposed, existing defense methods have two limitations. First, the discovery success rate is not very high. Second, existing methods depend on the output of a particular layer in a specific learning structure. In this paper, we propose a powerful method for adversarial samples using Large Margin Cosine Estimate(LMCE). By iteratively calculating the large-margin cosine uncertainty estimates between the model predictions, the results can be regarded as a novel measurement of model uncertainty estimation and is available to detect adversarial samples by training using a simple machine learning algorithm. Comparing it with the way in which adversar- ial samples are generated, it is confirmed that this measurement can better distinguish hostile disturbances. We modeled deep neural network attacks and established defense mechanisms against various types of adversarial attacks. Classifier gets better performance than the baseline model. The approach is validated on a series of standard datasets including MNIST and CIFAR −10, outperforming previous ensemble method with strong statistical significance. Experiments indicate that our approach generalizes better across different architectures and attacks.",
"title": ""
},
{
"docid": "a27a05cb00d350f9021b5c4f609d772c",
"text": "Traffic light detection from a moving vehicle is an important technology both for new safety driver assistance functions as well as for autonomous driving in the city. In this paper we present a machine learning framework for detection of traffic lights that can handle in realtime both day and night situations in a unified manner. A semantic segmentation method is employed to generate traffic light candidates, which are then confirmed and classified by a geometric and color features based classifier. Temporal consistency is enforced by using a tracking by detection method. We evaluate our method on a publicly available dataset recorded at daytime in order to compare to existing methods and we show similar performance. We also present an evaluation on two additional datasets containing more than 50 intersections with multiple traffic lights recorded both at day and during nighttime and we show that our method performs consistently in those situations.",
"title": ""
},
{
"docid": "64b3bcb65b9e890810561ab5700dec32",
"text": "In this paper, we address the fusion problem of two estimates, where the cross-correlation between the estimates is unknown. To solve the problem within the Bayesian framework, we assume that the covariance matrix has a prior distribution. We also assume that we know the covariance of each estimate, i.e., the diagonal block of the entire co-variance matrix (of the random vector consisting of the two estimates). We then derive the conditional distribution of the off-diagonal blocks, which is the cross-correlation of our interest. The conditional distribution happens to be the inverted matrix variate t-distribution. We can readily sample from this distribution and use a Monte Carlo method to compute the minimum mean square error estimate for the fusion problem. Simulations show that the proposed method works better than the popular covariance intersection method.",
"title": ""
},
{
"docid": "b47980c393116ac598a7f5c38fb402b9",
"text": "Skin cancer, the most common human malignancy, is primarily diagnosed visually by physicians . Classification with an automated method like CNN [2, 3] shows potential for diagnosing the skin cancer according to the medical photographs. By now, the deep convolutional neural networks can achieve the level of human dermatologist . This work is dedicated on developing a Deep Learning method for ISIC [5] 2017 Skin Lesion Detection Competition to classify the dermatology pictures, which is aiming at improving the diagnostic accuracy rate. As an result, it will improve the general level of the human health. The challenge falls into three sub-challenges, including Lesion Segmentation, Lesion Dermoscopic Feature Extraction and Lesion Classification. We focus on the Lesion Classification task. The proposed algorithm is comprised of three steps: (1) original images preprocessing, (2) modelling the processed images using CNN [2, 3] in Caffe [4] framework, (3) predicting the test images and calculating the scores that represent the likelihood of corresponding classification. The models are built on the source images are using the Caffe [4] framework. The scores in prediction step are obtained by two different models from the source images.",
"title": ""
},
{
"docid": "d7a2708fc70f6480d9026aeefce46610",
"text": "In order to study the differential protein expression in complex biological samples, strategies for rapid, highly reproducible and accurate quantification are necessary. Isotope labeling and fluorescent labeling techniques have been widely used in quantitative proteomics research. However, researchers are increasingly turning to label-free shotgun proteomics techniques for faster, cleaner, and simpler results. Mass spectrometry-based label-free quantitative proteomics falls into two general categories. In the first are the measurements of changes in chromatographic ion intensity such as peptide peak areas or peak heights. The second is based on the spectral counting of identified proteins. In this paper, we will discuss the technologies of these label-free quantitative methods, statistics, available computational software, and their applications in complex proteomics studies.",
"title": ""
},
{
"docid": "ed9d72566cdf3e353bf4b1e589bf85eb",
"text": "In the last few years progress has been made in understanding basic mechanisms involved in damage to the inner ear and various potential therapeutic approaches have been developed. It was shown that hair cell loss mediated by noise or toxic drugs may be prevented by antioxidants, inhibitors of intracellular stress pathways and neurotrophic factors/neurotransmission blockers. Moreover, there is hope that once hair cells are lost, their regeneration can be induced or that stem cells can be used to build up new hair cells. However, although tremendous progress has been made, most of the concepts discussed in this review are still in the \"animal stage\" and it is difficult to predict which approach will finally enter clinical practice. In my opinion it is highly probable that some concepts of hair cell protection will enter clinical practice first, while others, such as the use of stem cells to restore hearing, are still far from clinical utility.",
"title": ""
}
] |
scidocsrr
|
b175cd9f2ecd8c7706600f101a2e21dd
|
Recurrent Neural Network Postfilters for Statistical Parametric Speech Synthesis
|
[
{
"docid": "104c9347338f4e725e3c1907a4991977",
"text": "This paper derives a speech parameter generation algorithm for HMM-based speech synthesis, in which speech parameter sequence is generated from HMMs whose observation vector consists of spectral parameter vector and its dynamic feature vectors. In the algorithm, we assume that the state sequence (state and mixture sequence for the multi-mixture case) or a part of the state sequence is unobservable (i.e., hidden or latent). As a result, the algorithm iterates the forward-backward algorithm and the parameter generation algorithm for the case where state sequence is given. Experimental results show that by using the algorithm, we can reproduce clear formant structure from multi-mixture HMMs as compared with that produced from single-mixture HMMs.",
"title": ""
},
{
"docid": "d46594f40795de0feef71b480a53553f",
"text": "Feed-forward, Deep neural networks (DNN)-based text-tospeech (TTS) systems have been recently shown to outperform decision-tree clustered context-dependent HMM TTS systems [1, 4]. However, the long time span contextual effect in a speech utterance is still not easy to accommodate, due to the intrinsic, feed-forward nature in DNN-based modeling. Also, to synthesize a smooth speech trajectory, the dynamic features are commonly used to constrain speech parameter trajectory generation in HMM-based TTS [2]. In this paper, Recurrent Neural Networks (RNNs) with Bidirectional Long Short Term Memory (BLSTM) cells are adopted to capture the correlation or co-occurrence information between any two instants in a speech utterance for parametric TTS synthesis. Experimental results show that a hybrid system of DNN and BLSTM-RNN, i.e., lower hidden layers with a feed-forward structure which is cascaded with upper hidden layers with a bidirectional RNN structure of LSTM, can outperform either the conventional, decision tree-based HMM, or a DNN TTS system, both objectively and subjectively. The speech trajectory generated by the BLSTM-RNN TTS is fairly smooth and no dynamic constraints are needed.",
"title": ""
}
] |
[
{
"docid": "b7944edc9e6704cbf59489f112f46c11",
"text": "The basic paradigm of asset pricing is in vibrant f lux. The purely rational approach is being subsumed by a broader approach based upon the psychology of investors. In this approach, security expected returns are determined by both risk and misvaluation. This survey sketches a framework for understanding decision biases, evaluates the a priori arguments and the capital market evidence bearing on the importance of investor psychology for security prices, and reviews recent models. The best plan is . . . to profit by the folly of others. — Pliny the Elder, from John Bartlett, comp. Familiar Quotations, 9th ed. 1901. IN THE MUDDLED DAYS BEFORE THE RISE of modern finance, some otherwisereputable economists, such as Adam Smith, Irving Fisher, John Maynard Keynes, and Harry Markowitz, thought that individual psychology affects prices.1 What if the creators of asset-pricing theory had followed this thread? Picture a school of sociologists at the University of Chicago proposing the Deficient Markets Hypothesis: that prices inaccurately ref lect all available information. A brilliant Stanford psychologist, call him Bill Blunte, invents the Deranged Anticipation and Perception Model ~or DAPM!, in which proxies for market misvaluation are used to predict security returns. Imagine the euphoria when researchers discovered that these mispricing proxies ~such * Hirshleifer is from the Fisher College of Business, The Ohio State University. This survey was written for presentation at the American Finance Association Annual Meetings in New Orleans, January, 2001. I especially thank the editor, George Constantinides, for valuable comments and suggestions. I also thank Franklin Allen, the discussant, Nicholas Barberis, Robert Bloomfield, Michael Brennan, Markus Brunnermeier, Joshua Coval, Kent Daniel, Ming Dong, Jack Hirshleifer, Harrison Hong, Soeren Hvidkjaer, Ravi Jagannathan, Narasimhan Jegadeesh, Andrew Karolyi, Charles Lee, Seongyeon Lim, Deborah Lucas, Rajnish Mehra, Norbert Schwarz, Jayanta Sen, Tyler Shumway, René Stulz, Avanidhar Subrahmanyam, Siew Hong Teoh, Sheridan Titman, Yue Wang, Ivo Welch, and participants of the Dice Finance Seminar at Ohio State University for very helpful discussions and comments. 1 Smith analyzed how the “overweening conceit” of mankind caused labor to be underpriced in more enterprising pursuits. Young workers do not arbitrage away pay differentials because they are prone to overestimate their ability to succeed. Fisher wrote a book on money illusion; in The Theory of Interest ~~1930!, pp. 493–494! he argued that nominal interest rates systematically fail to adjust sufficiently for inf lation, and explained savings behavior in relation to self-control, foresight, and habits. Keynes ~1936! famously commented on animal spirits in stock markets. Markowitz ~1952! proposed that people focus on gains and losses relative to reference points, and that this helps explain the pricing of insurance and lotteries. THE JOURNAL OF FINANCE • VOL. LVI, NO. 4 • AUGUST 2001",
"title": ""
},
{
"docid": "39325a6b06c107fe7d06b958ebcb5f54",
"text": "Trunk movements in the frontal and sagittal planes were studied in 10 healthy males (18-35 yrs) during normal walking (1.0-2.5 m/s) and running (2.0-6.0 m/s) on a treadmill. Movements were recorded with a Selspot optoelectronic system. Directions, amplitudes and phase relationships to the stride cycle (defined by the leg movements) were analyzed for both linear and angular displacements. During one stride cycle the trunk displayed two oscillations in the vertical (mean net amplitude 2.5-9.5 cm) and horizontal, forward-backward directions (mean net amplitude 0.5-3 cm) and one oscillation in the lateral, side to side direction (mean net amplitude 2-6 cm). The magnitude and timing of the various oscillations varied in a different way with speed and mode of progression. Differences in amplitudes and timing of the movements at separate levels along the spine gave rise to angular oscillations with a similar periodicity as the linear displacements in both planes studied. The net angular trunk tilting in the frontal plane increased with speed from 3-10 degrees. The net forward-backward trunk inclination showed a small increase with speed up to 5 degrees in fast running. The mean forward inclination of the trunk increased from 6 degrees to about 13 degrees with speed. Peak inclination to one side occurred during the support phase of the leg on the same side. Peak forward inclination was reached at the initiation of the support phase in walking, whereas in running the peak inclination was in the opposite direction at this point. The adaptations of trunk movements to speed and mode of progression could be related to changing mechanical conditions and different demands on equilibrium control due to e.g. changes in support phase duration and leg movements.",
"title": ""
},
{
"docid": "15e440bc952db5b0ad71617e509770b9",
"text": "The task of recommending relevant scientific literature for a draft academic paper has recently received significant interest. In our effort to ease the discovery of scientific literature and augment scientific writing, we aim to improve the relevance of results based on a shallow semantic analysis of the source document and the potential documents to recommend. We investigate the utility of automatic argumentative and rhetorical annotation of documents for this purpose. Specifically, we integrate automatic Core Scientific Concepts (CoreSC) classification into a prototype context-based citation recommendation system and investigate its usefulness to the task. We frame citation recommendation as an information retrieval task and we use the categories of the annotation schemes to apply different weights to the similarity formula. Our results show interesting and consistent correlations between the type of citation and the type of sentence containing the relevant information.",
"title": ""
},
{
"docid": "d7c3f86e05eb471f7c0952173ae953ae",
"text": "Rigid robotic manipulators employ traditional sensors such as encoders or potentiometers to measure joint angles and determine end-effector position. Manipulators that are flexible, however, introduce motions that are much more difficult to measure. This is especially true for continuum manipulators that articulate by means of material compliance. In this paper, we present a vision based system for quantifying the 3-D shape of a flexible manipulator in real-time. The sensor system is validated for accuracy with known point measurements and for precision by estimating a known 3-D shape. We present two applications of the validated system relating to the open-loop control of a tendon driven continuum manipulator. In the first application, we present a new continuum manipulator model and use the sensor to quantify 3-D performance. In the second application, we use the shape sensor system for model parameter estimation in the absence of tendon tension information.",
"title": ""
},
{
"docid": "1d1f93011e83bcefd207c845b2edafcd",
"text": "Although single dialyzer use and reuse by chemical reprocessing are both associated with some complications, there is no definitive advantage to either in this respect. Some complications occur mainly at the first use of a dialyzer: a new cellophane or cuprophane membrane may activate the complement system, or a noxious agent may be introduced to the dialyzer during production or generated during storage. These agents may not be completely removed during the routine rinsing procedure. The reuse of dialyzers is associated with environmental contamination, allergic reactions, residual chemical infusion (rebound release), inadequate concentration of disinfectants, and pyrogen reactions. Bleach used during reprocessing causes a progressive increase in dialyzer permeability to larger molecules, including albumin. Reprocessing methods without the use of bleach are associated with progressive decreases in membrane permeability, particularly to larger molecules. Most comparative studies have not shown differences in mortality between centers reusing and those not reusing dialyzers, however, the largest cluster of dialysis-related deaths occurred with single-use dialyzers due to the presence of perfluorohydrocarbon introduced during the manufacturing process and not completely removed during preparation of the dialyzers before the dialysis procedure. The cost savings associated with reuse is substantial, especially with more expensive, high-flux synthetic membrane dialyzers. With reuse, some dialysis centers can afford to utilize more efficient dialyzers that are more expensive; consequently they provide a higher dose of dialysis and reduce mortality. Some studies have shown minimally higher morbidity with chemical reuse, depending on the method. Waste disposal is definitely decreased with the reuse of dialyzers, thus environmental impacts are lessened, particularly if reprocessing is done by heat disinfection. It is safe to predict that dialyzer reuse in dialysis centers will continue because it also saves money for the providers. Saving both time for the patient and money for the provider were the main motivations to design a new machine for daily home hemodialysis. The machine, developed in the 1990s, cleans and heat disinfects the dialyzer and lines in situ so they do not need to be changed for a month. In contrast, reuse of dialyzers in home hemodialysis patients treated with other hemodialysis machines is becoming less popular and is almost extinct.",
"title": ""
},
{
"docid": "f07d44c814bdb87ffffc42ace8fd53a4",
"text": "We describe a batch method that uses a sizeable fraction of the training set at each iteration, and that employs secondorder information. • To improve the learning process, we follow a multi-batch approach in which the batch changes at each iteration. • This inherently gives the algorithm a stochastic flavor that can cause instability in L-BFGS. • We show how to perform stable quasi-Newton updating in the multi-batch setting, illustrate the behavior of the algorithm in a distributed computing platform, and study its convergence properties for both the convex and nonconvex cases. Introduction min w∈Rd F (w) = 1 n n ∑ i=1 f (w;x, y) Idea: select a sizeable sample Sk ⊂ {1, . . . , n} at every iteration and perform quasi-Newton steps 1. Distributed computing setting: distributed gradient computation (with faults) 2. Multi-Batch setting: samples are changed at every iteration to accelerate learning Goal: show that stable quasi-Newton updating can be achieved in both settings without incurring extra computational cost, or special synchronization Issue: samples used at the beginning and at the end of every iteration are different • potentially harmful for quasi-Newton methods Key: controlled sampling • consecutive samples overlap Sk ∩ Sk+1 = Ok 6= ∅ • gradient differences based on this overlap – stable quasi-Newton updates Multi-Batch L-BFGS Method At the k-th iteration: • sample Sk ⊂ {1, . . . , n} chosen, and iterates updated via wk+1 = wk − αkHkgk k where gk k is the batch gradient g Sk k = 1 |Sk| ∑ i∈Sk ∇f ( wk;x , y ) and Hk is the inverse BFGS Hessian approximation Hk+1 =V T k HkVk + ρksks T k ρk = 1 yT k sk , Vk = 1− ρkyksk • to ensure consistent curvature pair updates sk+1 = wk+1 − wk, yk+1 = gk k+1 − g Ok k where gk k+1 and g Ok k are gradients based on the overlapping samples only Ok = Sk ∩ Sk+1 Sample selection:",
"title": ""
},
{
"docid": "c8dc167294292425ac070c6fa56e65c5",
"text": "Alongside developing systems for scalable machine learning and collaborative data science activities, there is an increasing trend toward publicly shared data science projects, hosted in general or dedicated hosting services, such as GitHub and DataHub. The artifacts of the hosted projects are rich and include not only text files, but also versioned datasets, trained models, project documents, etc. Under the fast pace and expectation of data science activities, model discovery, i.e., finding relevant data science projects to reuse, is an important task in the context of data management for end-to-end machine learning. In this paper, we study the task and present the ongoing work on ModelHub Discovery, a system for finding relevant models in hosted data science projects. Instead of prescribing a structured data model for data science projects, we take an information retrieval approach by decomposing the discovery task into three major steps: project query and matching, model comparison and ranking, and processing and building ensembles with returned models. We describe the motivation and desiderata, propose techniques, and present opportunities and challenges for model discovery for hosted data science projects.",
"title": ""
},
{
"docid": "3bba595fa3a3cd42ce9b3ca052930d55",
"text": "After about a decade of intense research, spurred by both economic and operational considerations, and by environmental concerns, energy efficiency has now become a key pillar in the design of communication networks. With the advent of the fifth generation of wireless networks, with millions more base stations and billions of connected devices, the need for energy-efficient system design and operation will be even more compelling. This survey provides an overview of energy-efficient wireless communications, reviews seminal and recent contribution to the state-of-the-art, including the papers published in this special issue, and discusses the most relevant research challenges to be addressed in the future.",
"title": ""
},
{
"docid": "0a58548ceecaa13e1c77a96b4d4685c4",
"text": "Ground vehicles equipped with monocular vision systems are a valuable source of high resolution image data for precision agriculture applications in orchards. This paper presents an image processing framework for fruit detection and counting using orchard image data. A general purpose image segmentation approach is used, including two feature learning algorithms; multi-scale Multi-Layered Perceptrons (MLP) and Convolutional Neural Networks (CNN). These networks were extended by including contextual information about how the image data was captured (metadata), which correlates with some of the appearance variations and/or class distributions observed in the data. The pixel-wise fruit segmentation output is processed using the Watershed Segmentation (WS) and Circular Hough Transform (CHT) algorithms to detect and count individual fruits. Experiments were conducted in a commercial apple orchard near Melbourne, Australia. The results show an improvement in fruit segmentation performance with the inclusion of metadata on the previously benchmarked MLP network. We extend this work with CNNs, bringing agrovision closer to the state-of-the-art in computer vision, where although metadata had negligible influence, the best pixel-wise F1-score of 0.791 was achieved. The WS algorithm produced the best apple detection and counting results, with a detection F1-score of 0.858. As a final step, image fruit counts were accumulated over multiple rows at the orchard and compared against the post-harvest fruit counts that were obtained from a grading and counting machine. The count estimates using CNN and WS resulted in the best performance for this dataset, with a squared correlation coefficient of r = 0.826.",
"title": ""
},
{
"docid": "c0584e11a64c6679ad43a0a91d92740d",
"text": "A challenge in teaching usability engineering is providing appropriate hands-on project experience. Students need projects that are realistic enough to address meaningful issues, but manageable within one semester. We describe our use of online case studies to motivate and model course projects in usability engineering. The cases illustrate scenario-based usability methods, and are accessed via a custom browser. We summarize the content and organization of the case studies, several case-based learning activities, and students' reactions to the activities. We conclude with a discussion of future directions for case studies in HCI education.",
"title": ""
},
{
"docid": "62b6c1caae1ff1e957a5377692898299",
"text": "We present a cognitively plausible novel framework capable of learning the grounding in visual semantics and the grammar of natural language commands given to a robot in a table top environment. The input to the system consists of video clips of a manually controlled robot arm, paired with natural language commands describing the action. No prior knowledge is assumed about the meaning of words, or the structure of the language, except that there are different classes of words (corresponding to observable actions, spatial relations, and objects and their observable properties). The learning process automatically clusters the continuous perceptual spaces into concepts corresponding to linguistic input. A novel relational graph representation is used to build connections between language and vision. As well as the grounding of language to perception, the system also induces a set of probabilistic grammar rules. The knowledge learned is used to parse new commands involving previously unseen objects.",
"title": ""
},
{
"docid": "10e88f0d1a339c424f7e0b8fa5b43c1e",
"text": "Hash functions play an important role in modern cryptography. This paper investigates optimisation techniques that have recently been proposed in the literature. A new VLSI architecture for the SHA-256 and SHA-512 hash functions is presented, which combines two popular hardware optimisation techniques, namely pipelining and unrolling. The SHA processors are developed for implementation on FPGAs, thereby allowing rapid prototyping of several designs. Speed/area results from these processors are analysed and are shown to compare favourably with other FPGA-based implementations, achieving the fastest data throughputs in the literature to date",
"title": ""
},
{
"docid": "611eacd767f1ea709c1c4aca7acdfcdb",
"text": "This paper presents a bi-directional converter applied in electric bike. The main structure is a cascade buck-boost converter, which transfers the energy stored in battery for driving motor, and can recycle the energy resulted from the back electromotive force (BEMF) to charge battery by changing the operation mode. Moreover, the proposed converter can also serve as a charger by connecting with AC line directly. Besides, the single-chip DSP TMS320F2812 is adopted as a control core to manage the switching behaviors of each mode and to detect the battery capacity. In this paper, the equivalent models of each mode and complete design considerations are all detailed. All the experimental results are used to demonstrate the feasibility.",
"title": ""
},
{
"docid": "b8b7abcef8e23f774bd4e74067a27e6f",
"text": "This note evaluates several hardware platforms and operating systems using a set of benchmarks that test memory bandwidth and various operating system features such as kernel entry/exit and file systems. The overall conclusion is that operating system performance does not seem to be improving at the same rate as the base speed of the underlying hardware. Copyright 1989 Digital Equipment Corporation d i g i t a l Western Research Laboratory 100 Hamilton Avenue Palo Alto, California 94301 USA",
"title": ""
},
{
"docid": "6876748abb097dcce370288388e0965c",
"text": "The design and manufacturing of pop-up books are mainly manual at present, but a number of the processes therein can benefit from computerization and automation. This paper studies one aspect of the design of pop-up books: the mathematical modelling and simulation of the pieces popping up as a book is opened. It developes the formulae for the essential parameters in the pop-up animation. This animation enables the designer to determine on a computer if a particular set-up is appropriate to the theme which the page is designed to express, removing the need for the laborious and time-consuming task of making manual prototypes",
"title": ""
},
{
"docid": "1de3364e104a85af05f4a910ede83109",
"text": "Activity theory holds that the human mind is the product of our interaction with people and artifacts in the context of everyday activity. Acting with Technology makes the case for activity theory as a basis for...",
"title": ""
},
{
"docid": "05145a1f9f1d1423acb705159ec28f5e",
"text": "We describe the first sub-quadratic sampling algorithm for the Multiplicative Attribute Graph Model (MAGM) of Kim and Leskovec (2010). We exploit the close connection between MAGM and the Kronecker Product Graph Model (KPGM) of Leskovec et al. (2010), and show that to sample a graph from a MAGM it suffices to sample small number of KPGM graphs and quilt them together. Under a restricted set of technical conditions our algorithm runs in O ( (log2(n)) 3 |E| ) time, where n is the number of nodes and |E| is the number of edges in the sampled graph. We demonstrate the scalability of our algorithm via extensive empirical evaluation; we can sample a MAGM graph with 8 million nodes and 20 billion edges in under 6 hours.",
"title": ""
},
{
"docid": "79e6d47a27d8271ae0eaa0526df241a7",
"text": "A DC-DC buck converter capable of handling loads from 20 μA to 100 mA and operating off a 2.8-4.2 V battery is implemented in a 45 nm CMOS process. In order to handle high battery voltages in this deeply scaled technology, multiple transistors are stacked in the power train. Switched-Capacitor DC-DC converters are used for internal rail generation for stacking and supplies for control circuits. An I-C DAC pulse width modulator with sleep mode control is proposed which is both area and power-efficient as compared with previously published pulse width modulator schemes. Both pulse frequency modulation (PFM) and pulse width modulation (PWM) modes of control are employed for the wide load range. The converter achieves a peak efficiency of 75% at 20 μA, 87.4% at 12 mA in PFM, and 87.2% at 53 mA in PWM.",
"title": ""
},
{
"docid": "715e5655651ed879f2439ed86e860bc9",
"text": "This paper presents a new permanent-magnet gear based on the cycloid gearing principle, which normally is characterized by an extreme torque density and a very high gearing ratio. An initial design of the proposed magnetic gear was designed, analyzed, and optimized with an analytical model regarding torque density. The results were promising as compared to other high-performance magnetic-gear designs. A test model was constructed to verify the analytical model.",
"title": ""
},
{
"docid": "2bb36d78294b15000b78acd7a0831762",
"text": "This study aimed to verify whether achieving a dist inctive academic performance is unlikely for students at high risk of smartphone addiction. Additionally, it verified whether this phenomenon was equally applicable to male and femal e students. After implementing systematic random sampling, 293 university students participated by completing an online survey questionnaire posted on the university’s stu dent information system. The survey questionnaire collected demographic information and responses to the Smartphone Addiction Scale-Short Version (SAS-SV) items. The results sho wed that male and female university students were equally susceptible to smartphone add iction. Additionally, male and female university students were equal in achieving cumulat ive GPAs with distinction or higher within the same levels of smartphone addiction. Fur thermore, undergraduate students who were at a high risk of smartphone addiction were le ss likely to achieve cumulative GPAs of distinction or higher.",
"title": ""
}
] |
scidocsrr
|
5ed37be0e4f614c80f76470b8848c91b
|
Automatic repair of buggy if conditions and missing preconditions with SMT
|
[
{
"docid": "57d0e046517cc669746d4ecda352dc3f",
"text": "This paper is about understanding the nature of bug fixing by analyzing thousands of bug fix transactions of software repositories. It then places this learned knowledge in the context of automated program repair. We give extensive empirical results on the nature of human bug fixes at a large scale and a fine granularity with abstract syntax tree differencing. We set up mathematical reasoning on the search space of automated repair and the time to navigate through it. By applying our method on 14 repositories of Java software and 89,993 versioning transactions, we show that not all probabilistic repair models are equivalent.",
"title": ""
},
{
"docid": "2cb8ef67eb09f9fdd8c07e562cff6996",
"text": "Patch generation is an essential software maintenance task because most software systems inevitably have bugs that need to be fixed. Unfortunately, human resources are often insufficient to fix all reported and known bugs. To address this issue, several automated patch generation techniques have been proposed. In particular, a genetic-programming-based patch generation technique, GenProg, proposed by Weimer et al., has shown promising results. However, these techniques can generate nonsensical patches due to the randomness of their mutation operations. To address this limitation, we propose a novel patch generation approach, Pattern-based Automatic program Repair (PAR), using fix patterns learned from existing human-written patches. We manually inspected more than 60,000 human-written patches and found there are several common fix patterns. Our approach leverages these fix patterns to generate program patches automatically. We experimentally evaluated PAR on 119 real bugs. In addition, a user study involving 89 students and 164 developers confirmed that patches generated by our approach are more acceptable than those generated by GenProg. PAR successfully generated patches for 27 out of 119 bugs, while GenProg was successful for only 16 bugs.",
"title": ""
},
{
"docid": "5680257be3ac330b19645017953f6fb4",
"text": "Debugging consumes significant time and effort in any major software development project. Moreover, even after the root cause of a bug is identified, fixing the bug is non-trivial. Given this situation, automated program repair methods are of value. In this paper, we present an automated repair method based on symbolic execution, constraint solving and program synthesis. In our approach, the requirement on the repaired code to pass a given set of tests is formulated as a constraint. Such a constraint is then solved by iterating over a layered space of repair expressions, layered by the complexity of the repair code. We compare our method with recently proposed genetic programming based repair on SIR programs with seeded bugs, as well as fragments of GNU Coreutils with real bugs. On these subjects, our approach reports a higher success-rate than genetic programming based repair, and produces a repair faster.",
"title": ""
}
] |
[
{
"docid": "85cdebb26246db1d5a9e6094b0a0c2e6",
"text": "The fast simulation of large networks of spiking neurons is a major task for the examination of biology-inspired vision systems. Networks of this type label features by synchronization of spikes and there is strong demand to simulate these e,ects in real world environments. As the calculations for one model neuron are complex, the digital simulation of large networks is not e>cient using existing simulation systems. Consequently, it is necessary to develop special simulation techniques. This article introduces a wide range of concepts for the di,erent parts of digital simulator systems for large vision networks and presents accelerators based on these foundations. c © 2002 Elsevier Science B.V. All rights",
"title": ""
},
{
"docid": "7b93d57ea77d234c507f8d155e518ebc",
"text": "A cascade of fully convolutional neural networks is proposed to segment multi-modal Magnetic Resonance (MR) images with brain tumor into background and three hierarchical regions: whole tumor, tumor core and enhancing tumor core. The cascade is designed to decompose the multi-class segmentation problem into a sequence of three binary segmentation problems according to the subregion hierarchy. The whole tumor is segmented in the first step and the bounding box of the result is used for the tumor core segmentation in the second step. The enhancing tumor core is then segmented based on the bounding box of the tumor core segmentation result. Our networks consist of multiple layers of anisotropic and dilated convolution filters, and they are combined with multi-view fusion to reduce false positives. Residual connections and multi-scale predictions are employed in these networks to boost the segmentation performance. Experiments with BraTS 2017 validation set show that the proposed method achieved average Dice scores of 0.7859, 0.9050, 0.8378 for enhancing tumor core, whole tumor and tumor core, respectively. The corresponding values for BraTS 2017 testing set were 0.7831, 0.8739, and 0.7748, respectively.",
"title": ""
},
{
"docid": "c4912e6187e5e64ec70dd4423f85474a",
"text": "Communication technologies are becoming increasingly diverse in form and functionality, making it important to identify which aspects of these technologies actually improve geographically distributed communication. Our study examines two potentially important aspects of communication technologies which appear in robot-mediated communication - physical embodiment and control of this embodiment. We studied the impact of physical embodiment and control upon interpersonal trust in a controlled laboratory experiment using three different videoconferencing settings: (1) a handheld tablet controlled by a local user, (2) an embodied system controlled by a local user, and (3) an embodied system controlled by a remote user (n = 29 dyads). We found that physical embodiment and control by the local user increased the amount of trust built between partners. These results suggest that both physical embodiment and control of the system influence interpersonal trust in mediated communication and have implications for future system designs.",
"title": ""
},
{
"docid": "5eb63e991a00290d5892d010d0b28fef",
"text": "In this paper we investigate deceptive defense strategies for web servers. Web servers are widely exploited resources in the modern cyber threat landscape. Often these servers are exposed in the Internet and accessible for a broad range of valid as well as malicious users. Common security strategies like firewalls are not sufficient to protect web servers. Deception based Information Security enables a large set of counter measures to decrease the efficiency of intrusions. In this work we depict several techniques out of the reconnaissance process of an attacker. We match these with deceptive counter measures. All proposed measures are implemented in an experimental web server with deceptive counter measure abilities. We also conducted an experiment with honeytokens and evaluated delay strategies against automated scanner tools.",
"title": ""
},
{
"docid": "397f1c1a01655098d8b35b04011400c7",
"text": "Pathology reports are a primary source of information for cancer registries which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study, we investigated deep learning and a convolutional neural network (CNN), for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against a more conventional term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods but trends were contingent on the CNN method and cancer site. These encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.",
"title": ""
},
{
"docid": "0084d9c69d79a971e7139ab9720dd846",
"text": "ÐRetrieving images from large and varied collections using image content as a key is a challenging and important problem. We present a new image representation that provides a transformation from the raw pixel data to a small set of image regions that are coherent in color and texture. This aBlobworldo representation is created by clustering pixels in a joint color-texture-position feature space. The segmentation algorithm is fully automatic and has been run on a collection of 10,000 natural images. We describe a system that uses the Blobworld representation to retrieve images from this collection. An important aspect of the system is that the user is allowed to view the internal representation of the submitted image and the query results. Similar systems do not offer the user this view into the workings of the system; consequently, query results from these systems can be inexplicable, despite the availability of knobs for adjusting the similarity metrics. By finding image regions that roughly correspond to objects, we allow querying at the level of objects rather than global image properties. We present results indicating that querying for images using Blobworld produces higher precision than does querying using color and texture histograms of the entire image in cases where the image contains distinctive objects. Index TermsÐSegmentation and grouping, image retrieval, image querying, clustering, Expectation-Maximization.",
"title": ""
},
{
"docid": "67826169bd43d22679f93108aab267a2",
"text": "Nonnegative matrix factorization (NMF) has become a widely used tool for the analysis of high-dimensional data as it automatically extracts sparse and meaningful features from a set of nonnegative data vectors. We first illustrate this property of NMF on three applications, in image processing, text mining and hyperspectral imaging –this is the why. Then we address the problem of solving NMF, which is NP-hard in general. We review some standard NMF algorithms, and also present a recent subclass of NMF problems, referred to as near-separable NMF, that can be solved efficiently (that is, in polynomial time), even in the presence of noise –this is the how. Finally, we briefly describe some problems in mathematics and computer science closely related to NMF via the nonnegative rank.",
"title": ""
},
{
"docid": "fe6f1234505ddf5fab14cd22119b8388",
"text": "This paper deals with identifying the genre of a movie by analyzing just the visual features of its trailer. This task seems to be very trivial for a human; our endeavor is to create a vision system that can do the same, accurately. We discuss the approaches we take and our experimental observations. The contributions of this work are : (1) we propose a neural network (based on VGG) that can classify movie trailers based on their genres; (2) we release a curated dataset, called YouTube-Trailer Dataset, which has over 800 movie trailers spanning over 4 genres. We achieve an accuracy of 80.1% with the spatial features, and 85% with using LSTM and set these results as the benchmark for this dataset. We have made the source code publicly available.1",
"title": ""
},
{
"docid": "db9e922bcdffffc6586d10fa363b2e2d",
"text": "Mallomonas eoa TAKAHASHII was first described by TAKAHASHII, who found the alga in ditches at Tsuruoka Parc, North-East Japan (TAKAHASHII 1960, 1963, ASMUND & TAKAHASHII 1969) . He studied the alga by transmission electron microscopy and described its different kinds of scales . However, he did not report the presence of any cysts . In the spring of 1971 a massive development of Mallomonas occurred under the ice in Lake Trummen, central South Sweden . Scanning electron microscopy revealed that the predominant species consisted of Mallomonas eoa TAKAHASHII, which occurred together with Synura petersenii KoRSHIKOV . In April the cells of Mallomonas eoa developed cysts and were studied by light microscopy and scanning electron microscopy. In contrast with earlier techniques the scanning electron microscopy made it possible to study the structure of the scales in various parts of the cell and to relate the cysts to the cells . Such knowledge is of importance also for paleolimnological research . Data on the quantitative and qualitative findings are reported below .",
"title": ""
},
{
"docid": "f472388e050e80837d2d5129ba8a358b",
"text": "Voice control has emerged as a popular method for interacting with smart-devices such as smartphones, smartwatches etc. Popular voice control applications like Siri and Google Now are already used by a large number of smartphone and tablet users. A major challenge in designing a voice control application is that it requires continuous monitoring of user?s voice input through the microphone. Such applications utilize hotwords such as \"Okay Google\" or \"Hi Galaxy\" allowing them to distinguish user?s voice command and her other conversations. A voice control application has to continuously listen for hotwords which significantly increases the energy consumption of the smart-devices.\n To address this energy efficiency problem of voice control, we present AccelWord in this paper. AccelWord is based on the empirical evidence that accelerometer sensors found in today?s mobile devices are sensitive to user?s voice. We also demonstrate that the effect of user?s voice on accelerometer data is rich enough so that it can be used to detect the hotwords spoken by the user. To achieve the goal of low energy cost but high detection accuracy, we combat multiple challenges, e.g. how to extract unique signatures of user?s speaking hotwords only from accelerometer data and how to reduce the interference caused by user?s mobility.\n We finally implement AccelWord as a standalone application running on Android devices. Comprehensive tests show AccelWord has hotword detection accuracy of 85% in static scenarios and 80% in mobile scenarios. Compared to the microphone based hotword detection applications such as Google Now and Samsung S Voice, AccelWord is 2 times more energy efficient while achieving the accuracy of 98% and 92% in static and mobile scenarios respectively.",
"title": ""
},
{
"docid": "de45682fcc57257365ae2a35978b8694",
"text": "Colloidal particles play an important role in various areas of material and pharmaceutical sciences, biotechnology, and biomedicine. In this overview we describe micro- and nano-particles used for the preparation of polyelectrolyte multilayer capsules and as drug delivery vehicles. An essential feature of polyelectrolyte multilayer capsule preparations is the ability to adsorb polymeric layers onto colloidal particles or templates followed by dissolution of these templates. The choice of the template is determined by various physico-chemical conditions: solvent needed for dissolution, porosity, aggregation tendency, as well as release of materials from capsules. Historically, the first templates were based on melamine formaldehyde, later evolving towards more elaborate materials such as silica and calcium carbonate. Their advantages and disadvantages are discussed here in comparison to non-particulate templates such as red blood cells. Further steps in this area include development of anisotropic particles, which themselves can serve as delivery carriers. We provide insights into application of particles as drug delivery carriers in comparison to microcapsules templated on them.",
"title": ""
},
{
"docid": "8b5bf8cf3832ac9355ed5bef7922fb5c",
"text": "Determining one's own position by means of a smartphone is an important issue for various applications in the fields of personal navigation or location-based services. Places like large airports, shopping malls or extensive underground parking lots require personal navigation but satellite signals and GPS connection cannot be obtained. Thus, alternative or complementary systems are needed. In this paper a system concept to integrate a foot-mounted inertial measurement unit (IMU) with an Android smartphone is presented. We developed a prototype to demonstrate and evaluate the implementation of pedestrian strapdown navigation on a smartphone. In addition to many other approaches we also fuse height measurements from a barometric sensor in order to stabilize height estimation over time. A very low-cost single-chip IMU is used to demonstrate applicability of the outlined system concept for potential commercial applications. In an experimental study we compare the achievable accuracy with a commercially available IMU. The evaluation shows very competitive results on the order of a few percent of traveled distance. Comparing performance, cost and size of the presented IMU the outlined approach carries an enormous potential in the field of indoor pedestrian navigation.",
"title": ""
},
{
"docid": "374f64916e84c01c0a6df6629ab02dbd",
"text": "NASA Glenn Research Center, in collaboration with the aerospace industry and academia, has begun the development of technology for a future hybrid-wing body electric airplane with a turboelectric distributed propulsion (TeDP) system. It is essential to design a subscale system to emulate the TeDP power grid, which would enable rapid analysis and demonstration of the proof-of-concept of the TeDP electrical system. This paper describes how small electrical machines with their controllers can emulate all the components in a TeDP power train. The whole system model in Matlab/Simulink was first developed and tested in simulation, and the simulation results showed that system dynamic characteristics could be implemented by using the closed-loop control of the electric motor drive systems. Then we designed a subscale experimental system to emulate the entire power system from the turbine engine to the propulsive fans. Firstly, we built a system to emulate a gas turbine engine driving a generator, consisting of two permanent magnet (PM) motors with brushless motor drives, coupled by a shaft. We programmed the first motor and its drive to mimic the speed-torque characteristic of the gas turbine engine, while the second motor and drive act as a generator and produce a torque load on the first motor. Secondly, we built another system of two PM motors and drives to emulate a motor driving a propulsive fan. We programmed the first motor and drive to emulate a wound-rotor synchronous motor. The propulsive fan was emulated by implementing fan maps and flight conditions into the fourth motor and drive, which produce a torque load on the driving motor. The stator of each PM motor is designed to travel axially to change the coupling between rotor and stator. This feature allows the PM motor to more closely emulate a wound-rotor synchronous machine. These techniques can convert the plain motor system into a unique TeDP power grid emulator that enables real-time simulation performance using hardware-in-the-loop (HIL).",
"title": ""
},
{
"docid": "43ff7d61119cc7b467c58c9c2e063196",
"text": "Financial engineering such as trading decision is an emerging research area and also has great commercial potentials. A successful stock buying/selling generally occurs near price trend turning point. Traditional technical analysis relies on some statistics (i.e. technical indicators) to predict turning point of the trend. However, these indicators can not guarantee the accuracy of prediction in chaotic domain. In this paper, we propose an intelligent financial trading system through a new approach: learn trading strategy by probabilistic model from high-level representation of time series – turning points and technical indicators. The main contributions of this paper are two-fold. First, we utilize high-level representation (turning point and technical indicators). High-level representation has several advantages such as insensitive to noise and intuitive to human being. However, it is rarely used in past research. Technical indicator is the knowledge from professional investors, which can generally characterize the market. Second, by combining high-level representation with probabilistic model, the randomness and uncertainty of chaotic system is further reduced. In this way, we achieve great results (comprehensive experiments on S&P500 components) in a chaotic domain in which the prediction is thought impossible in the past. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9970a23aedeb1a613a0909c28c35222e",
"text": "Imaging radars incorporating digital beamforming (DBF) typically require a uniform linear antenna array (ULA). However, using a large number of parallel receivers increases system complexity and cost. A switched antenna array can provide a similar performance at a lower expense. This paper describes an active switched antenna array with 32 integrated planar patch antennas illuminating a cylindrical lens. The array can be operated over a frequency range from 73 GHz–81 GHz. Together with a broadband FMCW frontend (Frequency Modulated Continuous Wave) a DBF radar was implemented. The design of the array is presented together with measurement results.",
"title": ""
},
{
"docid": "e0b1056544c3dc5c3b6f5bc072a72831",
"text": "In recent years, unfolding iterative algorithms as neural networks has become an empirical success in solving sparse recovery problems. However, its theoretical understanding is still immature, which prevents us from fully utilizing the power of neural networks. In this work, we study unfolded ISTA (Iterative Shrinkage Thresholding Algorithm) for sparse signal recovery. We introduce a weight structure that is necessary for asymptotic convergence to the true sparse signal. With this structure, unfolded ISTA can attain a linear convergence, which is better than the sublinear convergence of ISTA/FISTA in general cases. Furthermore, we propose to incorporate thresholding in the network to perform support selection, which is easy to implement and able to boost the convergence rate both theoretically and empirically. Extensive simulations, including sparse vector recovery and a compressive sensing experiment on real image data, corroborate our theoretical results and demonstrate their practical usefulness. We have made our codes publicly available.2.",
"title": ""
},
{
"docid": "adeb7bdbe9e903ae7041f93682b0a27c",
"text": "Self -- Management systems are the main objective of Autonomic Computing (AC), and it is needed to increase the running system's reliability, stability, and performance. This field needs to investigate some issues related to complex systems such as, self-awareness system, when and where an error state occurs, knowledge for system stabilization, analyze the problem, healing plan with different solutions for adaptation without the need for human intervention. This paper focuses on self-healing which is the most important component of Autonomic Computing. Self-healing is a technique that aims to detect, analyze, and repair existing faults within the system. All of these phases are accomplished in real-time system. In this approach, the system is capable of performing a reconfiguration action in order to recover from a permanent fault. Moreover, self-healing system should have the ability to modify its own behavior in response to changes within the environment. Recursive neural network has been proposed and used to solve the main challenges of self-healing, such as monitoring, interpretation, resolution, and adaptation.",
"title": ""
},
{
"docid": "5b748e2bc26e3fab531f0f741f7de176",
"text": "Computer models are widely used to simulate real processes. Within the computer model, there always exist some parameters which are unobservable in the real process but need to be specified in the computer model. The procedure to adjust these unknown parameters in order to fit the model to observed data and improve its predictive capability is known as calibration. In traditional calibration, once the optimal calibration parameter set is obtained, it is treated as known for future prediction. Calibration parameter uncertainty introduced from estimation is not accounted for. We will present a Bayesian calibration approach for stochastic computer models. We account for these additional uncertainties and derive the predictive distribution for the real process. Two numerical examples are used to illustrate the accuracy of the proposed method.",
"title": ""
},
{
"docid": "83580c373e9f91b021d90f520011a5da",
"text": "Pathfinding for a single agent is the problem of planning a route from an initial location to a goal location in an environment, going around obstacles. Pathfinding for multiple agents also aims to plan such routes for each agent, subject to different constraints, such as restrictions on the length of each path or on the total length of paths, no self-intersecting paths, no intersection of paths/plans, no crossing/meeting each other. It also has variations for finding optimal solutions, e.g., with respect to the maximum path length, or the sum of plan lengths. These problems are important for many real-life applications, such as motion planning, vehicle routing, environmental monitoring, patrolling, computer games. Motivated by such applications, we introduce a formal framework that is general enough to address all these problems: we use the expressive high-level representation formalism and efficient solvers of the declarative programming paradigm Answer Set Programming. We also introduce heuristics to improve the computational efficiency and/or solution quality. We show the applicability and usefulness of our framework by experiments, with randomly generated problem instances on a grid, on a real-world road network, and on a real computer game terrain.",
"title": ""
}
] |
scidocsrr
|
0ec72d6eee7c539c0883c5f3977df19c
|
The Factor Structure of the System Usability Scale
|
[
{
"docid": "19a28d8bbb1f09c56f5c85be003a9586",
"text": "ABSTRACT: Five questionnaires for assessing the usability of a website were compared in a study with 123 participants. The questionnaires studied were SUS, QUIS, CSUQ, a variant of Microsoft’s Product Reaction Cards, and one that we have used in our Usability Lab for several years. Each participant performed two tasks on each of two websites: finance.yahoo.com and kiplinger.com. All five questionnaires revealed that one site was significantly preferred over the other. The data were analyzed to determine what the results would have been at different sample sizes from 6 to 14. At a sample size of 6, only 30-40% of the samples would have identified that one of the sites was significantly preferred. Most of the data reach an apparent asymptote at a sample size of 12, where two of the questionnaires (SUS and CSUQ) yielded the same conclusion as the full dataset at least 90% of the time.",
"title": ""
},
{
"docid": "6deff83de8ad1e0d08565129c5cefb8a",
"text": "Correlations between prototypical usability metrics from 90 distinct usability tests were strong when measured at the task-level (r between .44 and .60). Using test-level satisfaction ratings instead of task-level ratings attenuated the correlations (r between .16 and .24). The method of aggregating data from a usability test had a significant effect on the magnitude of the resulting correlations. The results of principal components and factor analyses on the prototypical usability metrics provided evidence for an underlying construct of general usability with objective and subjective factors.",
"title": ""
}
] |
[
{
"docid": "2b38ac7d46a1b3555fef49a4e02cac39",
"text": "We study the problem of representation learning in heterogeneous networks. Its unique challenges come from the existence of multiple types of nodes and links, which limit the feasibility of the conventional network embedding techniques. We develop two scalable representation learning models, namely metapath2vec and metapath2vec++. The metapath2vec model formalizes meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to perform node embeddings. The metapath2vec++ model further enables the simultaneous modeling of structural and semantic correlations in heterogeneous networks. Extensive experiments show that metapath2vec and metapath2vec++ are able to not only outperform state-of-the-art embedding models in various heterogeneous network mining tasks, such as node classification, clustering, and similarity search, but also discern the structural and semantic correlations between diverse network objects.",
"title": ""
},
{
"docid": "ba7701a94880b59bbbd49fbfaca4b8c3",
"text": "Many rural roads lack sharp, smoothly curving edges and a homogeneous surface appearance, hampering traditional vision-based road-following methods. However, they often have strong texture cues parallel to the road direction in the form of ruts and tracks left by other vehicles. This paper describes an unsupervised algorithm for following ill-structured roads in which dominant texture orientations computed with Gabor wavelet filters vote for a consensus road vanishing point location. The technique is first described for estimating the direction of straight-road segments, then extended to curved and undulating roads by tracking the vanishing point indicated by a differential “strip” of voters moving up toward the nominal vanishing line. Finally, the vanishing point is used to constrain a search for the road boundaries by maximizing textureand color-based region discriminant functions. Results are shown for a variety of road scenes including gravel roads, dirt trails, and highways.",
"title": ""
},
{
"docid": "392a683cf9fdbd18c2ac6a46962a9911",
"text": "Recently, reinforcement learning has been successfully applied to the logical game of Go, various Atari games, and even a 3D game, Labyrinth, though it continues to have problems in sparse reward settings. It is difficult to explore, but also difficult to exploit, a small number of successes when learning policy. To solve this issue, the subgoal and option framework have been proposed. However, discovering subgoals online is too expensive to be used to learn options in large state spaces. We propose Micro-objective learning (MOL) to solve this problem. The main idea is to estimate how important a state is while training and to give an additional reward proportional to its importance. We evaluated our algorithm in two Atari games: Montezuma’s Revenge and Seaquest. With three experiments to each game, MOL significantly improved the baseline scores. Especially in Montezuma’s Revenge, MOL achieved two times better results than the previous state-of-the-art model.",
"title": ""
},
{
"docid": "cae661146bc0156af25d8014cb61ef0b",
"text": "The two critical factors distinguishing inventory management in a multifirm supply-chain context from the more traditional centrally planned perspective are incentive conflicts and information asymmetries. We study the well-known order quantity/reorder point (Q r) model in a two-player context, using a framework inspired by observations during a case study. We show how traditional allocations of decision rights to supplier and buyer lead to inefficient outcomes, and we use principal-agent models to study the effects of information asymmetries about setup cost and backorder cost, respectively. We analyze two “opposite” models of contracting on inventory policies. First, we derive the buyer’s optimal menu of contracts when the supplier has private information about setup cost, and we show how consignment stock can help reduce the impact of this information asymmetry. Next, we study consignment and assume the supplier cannot observe the buyer’s backorder cost. We derive the supplier’s optimal menu of contracts on consigned stock level and show that in this case, the supplier effectively has to overcompensate the buyer for the cost of each stockout. Our theoretical analysis and the case study suggest that consignment stock helps reduce cycle stock by providing the supplier with an additional incentive to decrease batch size, but simultaneously gives the buyer an incentive to increase safety stock by exaggerating backorder costs. This framework immediately points to practical recommendations on how supply-chain incentives should be realigned to overcome existing information asymmetries.",
"title": ""
},
{
"docid": "3b1a0eafe36176031b6463af4d962036",
"text": "Tasks that demand externalized attention reliably suppress default network activity while activating the dorsal attention network. These networks have an intrinsic competitive relationship; activation of one suppresses activity of the other. Consequently, many assume that default network activity is suppressed during goal-directed cognition. We challenge this assumption in an fMRI study of planning. Recent studies link default network activity with internally focused cognition, such as imagining personal future events, suggesting a role in autobiographical planning. However, it is unclear how goal-directed cognition with an internal focus is mediated by these opposing networks. A third anatomically interposed 'frontoparietal control network' might mediate planning across domains, flexibly coupling with either the default or dorsal attention network in support of internally versus externally focused goal-directed cognition, respectively. We tested this hypothesis by analyzing brain activity during autobiographical versus visuospatial planning. Autobiographical planning engaged the default network, whereas visuospatial planning engaged the dorsal attention network, consistent with the anti-correlated domains of internalized and externalized cognition. Critically, both planning tasks engaged the frontoparietal control network. Task-related activation of these three networks was anatomically consistent with independently defined resting-state functional connectivity MRI maps. Task-related functional connectivity analyses demonstrate that the default network can be involved in goal-directed cognition when its activity is coupled with the frontoparietal control network. Additionally, the frontoparietal control network may flexibly couple with the default and dorsal attention networks according to task domain, serving as a cortical mediator linking the two networks in support of goal-directed cognitive processes.",
"title": ""
},
{
"docid": "c042edd05232a996a119bfbeba71422e",
"text": "Supervised object detection and semantic segmentation require object or even pixel level annotations. When there exist image level labels only, it is challenging for weakly supervised algorithms to achieve accurate predictions. The accuracy achieved by top weakly supervised algorithms is still significantly lower than their fully supervised counterparts. In this paper, we propose a novel weakly supervised curriculum learning pipeline for multi-label object recognition, detection and semantic segmentation. In this pipeline, we first obtain intermediate object localization and pixel labeling results for the training images, and then use such results to train task-specific deep networks in a fully supervised manner. The entire process consists of four stages, including object localization in the training images, filtering and fusing object instances, pixel labeling for the training images, and task-specific network training. To obtain clean object instances in the training images, we propose a novel algorithm for filtering, fusing and classifying object instances collected from multiple solution mechanisms. In this algorithm, we incorporate both metric learning and density-based clustering to filter detected object instances. Experiments show that our weakly supervised pipeline achieves state-of-the-art results in multi-label image classification as well as weakly supervised object detection and very competitive results in weakly supervised semantic segmentation on MS-COCO, PASCAL VOC 2007 and PASCAL VOC 2012.",
"title": ""
},
{
"docid": "e08bc715d679ba0442883b4b0e481998",
"text": "Rheology, as a branch of physics, studies the deformation and flow of matter in response to an applied stress or strain. According to the materials’ behaviour, they can be classified as Newtonian or non-Newtonian (Steffe, 1996; Schramm, 2004). The most of the foodstuffs exhibit properties of non-Newtonian viscoelastic systems (Abang Zaidel et al., 2010). Among them, the dough can be considered as the most unique system from the point of material science. It is viscoelastic system which exhibits shear-thinning and thixotropic behaviour (Weipert, 1990). This behaviour is the consequence of dough complex structure in which starch granules (75-80%) are surrounded by three-dimensional protein (20-25%) network (Bloksma, 1990, as cited in Weipert, 2006). Wheat proteins are consisted of gluten proteins (80-85% of total wheat protein) which comprise of prolamins (in wheat gliadins) and glutelins (in wheat glutenins) and non gluten proteins (15-20% of the total wheat proteins) such as albumins and globulins (Veraverbeke & Delcour, 2002). Gluten complex is a viscoelastic protein responsible for dough structure formation. Among the cereal technologists, rheology is widely recognized as a valuable tool in quality assessment of flour. Hence, in the cereal scientific community, rheological measurements are generally employed throughout the whole processing chain in order to monitor the mechanical properties, molecular structure and composition of the material, to imitate materials’ behaviour during processing and to anticipate the quality of the final product (Dobraszczyk & Morgenstern, 2003). Rheology is particularly important technique in revealing the influence of flour constituents and additives on dough behaviour during breadmaking. There are many test methods available to measure rheological properties, which are commonly divided into empirical (descriptive, imitative) and fundamental (basic) (Scott Blair, 1958 as cited in Weipert, 1990). Although being criticized due to their shortcomings concerning inflexibility in defining the level of deforming force, usage of strong deformation forces, interpretation of results in relative non-SI units, large sample requirements and its impossibility to define rheological parameters such as stress, strain, modulus or viscosity (Weipert, 1990; Dobraszczyk & Morgenstern, 2003), empirical rheological measurements are still indispensable in the cereal quality laboratories. According to the empirical rheological parameters it is possible to determine the optimal flour quality for a particular purpose. The empirical techniques used for dough quality",
"title": ""
},
{
"docid": "ba533a610f95d44bf5416e17b07348dd",
"text": "It is argued that, hidden within the flow of signals from typical cameras, through image processing, to display media, is a homomorphic filter. While homomorphic filtering is often desirable, there are some occasions where it is not. Thus, cancellation of this implicit homomorphic filter is proposed, through the introduction of an antihomomorphic filter. This concept gives rise to the principle of quantigraphic image processing, wherein it is argued that most cameras can be modeled as an array of idealized light meters each linearly responsive to a semi-monotonic function of the quantity of light received, integrated over a fixed spectral response profile. This quantity depends only on the spectral response of the sensor elements in the camera. A particular class of functional equations, called comparametric equations, is introduced as a basis for quantigraphic image processing. These are fundamental to the analysis and processing of multiple images differing only in exposure. The \"gamma correction\" of an image is presented as a simple example of a comparametric equation, for which it is shown that the underlying quantigraphic function does not pass through the origin. Thus, it is argued that exposure adjustment by gamma correction is inherently flawed, and alternatives are provided. These alternatives, when applied to a plurality of images that differ only in exposure, give rise to a new kind of processing in the \"amplitude domain\". The theoretical framework presented in this paper is applicable to the processing of images from nearly all types of modern cameras. This paper is a much revised draft of a 1992 peer-reviewed but unpublished report by the author, entitled \"Lightspace and the Wyckoff principle.\"",
"title": ""
},
{
"docid": "b4803364e973142a82e1b3e5bea21f24",
"text": "Word2Vec is a widely used algorithm for extracting low-dimensional vector representations of words. It generated considerable excitement in the machine learning and natural language processing (NLP) communities recently due to its exceptional performance in many NLP applications such as named entity recognition, sentiment analysis, machine translation and question answering. State-of-the-art algorithms including those by Mikolov et al. have been parallelized for multi-core CPU architectures but are based on vector-vector operations that are memory-bandwidth intensive and do not efficiently use computational resources. In this paper, we improve reuse of various data structures in the algorithm through the use of minibatching, hence allowing us to express the problem using matrix multiply operations. We also explore different techniques to distribute word2vec computation across nodes in a compute cluster, and demonstrate good strong scalability up to 32 nodes. In combination, these techniques allow us to scale up the computation near linearly across cores and nodes, and process hundreds of millions of words per second, which is the fastest word2vec implementation to the best of our knowledge.",
"title": ""
},
{
"docid": "472605bc322f1fd2c90ad50baf19fffb",
"text": "Wireless sensor networks (WSNs) use the unlicensed industrial, scientific, and medical (ISM) band for transmissions. However, with the increasing usage and demand of these networks, the currently available ISM band does not suffice for their transmissions. This spectrum insufficiency problem has been overcome by incorporating the opportunistic spectrum access capability of cognitive radio (CR) into the existing WSN, thus giving birth to CR sensor networks (CRSNs). The sensor nodes in CRSNs depend on power sources that have limited power supply capabilities. Therefore, advanced and intelligent radio resource allocation schemes are very essential to perform dynamic and efficient spectrum allocation among sensor nodes and to optimize the energy consumption of each individual node in the network. Radio resource allocation schemes aim to ensure QoS guarantee, maximize the network lifetime, reduce the internode and internetwork interferences, etc. In this paper, we present a survey of the recent advances in radio resource allocation in CRSNs. Radio resource allocation schemes in CRSNs are classified into three major categories, i.e., centralized, cluster-based, and distributed. The schemes are further divided into several classes on the basis of performance optimization criteria that include energy efficiency, throughput maximization, QoS assurance, interference avoidance, fairness and priority consideration, and hand-off reduction. An insight into the related issues and challenges is provided, and future research directions are clearly identified.",
"title": ""
},
{
"docid": "bfe76736623dfc3271be4856f5dc2eef",
"text": "Fact-related information contained in fictional narratives may induce substantial changes in readers’ real-world beliefs. Current models of persuasion through fiction assume that these effects occur because readers are psychologically transported into the fictional world of the narrative. Contrary to general dual-process models of persuasion, models of persuasion through fiction also imply that persuasive effects of fictional narratives are persistent and even increase over time (absolute sleeper effect). In an experiment designed to test this prediction, 81 participants read either a fictional story that contained true as well as false assertions about realworld topics or a control story. There were large short-term persuasive effects of false information, and these effects were even larger for a group with a two-week assessment delay. Belief certainty was weakened immediately after reading but returned to baseline level after two weeks, indicating that beliefs acquired by reading fictional narratives are integrated into realworld knowledge.",
"title": ""
},
{
"docid": "147b207125fcda1dece25a6c5cd17318",
"text": "In this paper we present a neural network based system for automated e-mail filing into folders and antispam filtering. The experiments show that it is more accurate than several other techniques. We also investigate the effects of various feature selection, weighting and normalization methods, and also the portability of the anti-spam filter across different users.",
"title": ""
},
{
"docid": "cb1c0c62269e96555119bd7f8cd666aa",
"text": "The complexity of the visual world creates significant challenges for comprehensive visual understanding. In spite of recent successes in visual recognition, today’s vision systems would still struggle to deal with visual queries that require a deeper reasoning. We propose a knowledge base (KB) framework to handle an assortment of visual queries, without the need to train new classifiers for new tasks. Building such a large-scale multimodal KB presents a major challenge of scalability. We cast a large-scale MRF into a KB representation, incorporating visual, textual and structured data, as well as their diverse relations. We introduce a scalable knowledge base construction system that is capable of building a KB with half billion variables and millions of parameters in a few hours. Our system achieves competitive results compared to purpose-built models on standard recognition and retrieval tasks, while exhibiting greater flexibility in answering richer visual queries.",
"title": ""
},
{
"docid": "a5cd94446abfc46c6d5c4e4e376f1e0a",
"text": "Commitment problem in credit market and its eãects on economic growth are discussed. Completions of investment projects increase capital stock of the economy. These projects require credits which are ånanced by ånacial intermediaries. A simpliåed credit model of Dewatripont and Maskin is used to describe the ånancing process, in which the commitment problem or the \\soft budget constraint\" problem arises. However, in dynamic general equilibrium setup with endougenous determination of value and cost of projects, there arise multiple equilibria in the project ånancing model, namely reånancing equilirium and no-reånancing equilibrium. The former leads the economy to the stationary state with smaller capital stock level than the latter. Both the elimination of reånancing equilibrium and the possibility of \\Animal Spirits Cycles\" equilibrium are also discussed.",
"title": ""
},
{
"docid": "1cf029e7284359e3cdbc12118a6d4bf5",
"text": "Simultaneous localization and mapping (SLAM) is the process by which a mobile robot can build a map of the environment and, at the same time, use this map to compute its location. The past decade has seen rapid and exciting progress in solving the SLAM problem together with many compelling implementations of SLAM methods. The great majority of work has focused on improving computational efficiency while ensuring consistent and accurate estimates for the map and vehicle pose. However, there has also been much research on issues such as nonlinearity, data association, and landmark characterization, all of which are vital in achieving a practical and robust SLAM implementation. This tutorial focuses on the recursive Bayesian formulation of the SLAM problem in which probability distributions or estimates of absolute or relative locations of landmarks and vehicle pose are obtained. Part I of this tutorial (IEEE Robotics & Auomation Magazine, vol. 13, no. 2) surveyed the development of the essential SLAM algorithm in state-space and particle-filter form, described a number of key implementations, and cited locations of source code and real-world data for evaluation of SLAM algorithms. Part II of this tutorial (this article), surveys the current state of the art in SLAM research with a focus on three key areas: computational complexity, data association, and environment representation. Much of the mathematical notation and essential concepts used in this article are defined in Part I of this tutorial and, therefore, are not repeated here. SLAM, in its naive form, scales quadratically with the number of landmarks in a map. For real-time implementation, this scaling is potentially a substantial limitation in the use of SLAM methods. The complexity section surveys the many approaches that have been developed to reduce this complexity. These include linear-time state augmentation, sparsification in information form, partitioned updates, and submapping methods. A second major hurdle to overcome in the implementation of SLAM methods is to correctly associate observations of landmarks with landmarks held in the map. Incorrect association can lead to catastrophic failure of the SLAM algorithm. Data association is particularly important when a vehicle returns to a previously mapped region after a long excursion, the so-called loop-closure problem. The data association section surveys current data association methods used in SLAM. These include batch-validation methods that exploit constraints inherent in the SLAM formulation, appearance-based methods, and multihypothesis techniques. The third development discussed in this tutorial is the trend towards richer appearance-based models of landmarks and maps. While initially motivated by problems in data association and loop closure, these methods have resulted in qualitatively different methods of describing the SLAM problem, focusing on trajectory estimation rather than landmark estimation. The environment representation section surveys current developments in this area along a number of lines, including delayed mapping, the use of nongeometric landmarks, and trajectory estimation methods. SLAM methods have now reached a state of considerable maturity. Future challenges will center on methods enabling large-scale implementations in increasingly unstructured environments and especially in situations where GPS-like solutions are unavailable or unreliable: in urban canyons, under foliage, under water, or on remote planets.",
"title": ""
},
{
"docid": "0171c8e352b5236ead1a59f38dffc94d",
"text": "World Wide Web Consortium (W3C) is the international standards organization for the World Wide Web (www). It develops standards, specifications and recommendations to enhance the interoperability and maximize consensus about the content of the web and define major parts of what makes the World Wide Web work. Phishing is a type of Internet scams that seeks to get a user‟s credentials by fraud websites, such as passwords, credit card numbers, bank account details and other sensitive information. There are some characteristics in webpage source code that distinguish phishing websites from legitimate websites and violate the w3c standards, so we can detect the phishing attacks by check the webpage and search for these characteristics in the source code file if it exists or not. In this paper, we propose a phishing detection approach based on checking the webpage source code, we extract some phishing characteristics out of the W3C standards to evaluate the security of the websites, and check each character in the webpage source code, if we find a phishing character, we will decrease from the initial secure weight. Finally we calculate the security percentage based on the final weight, the high percentage indicates secure website and others indicates the website is most likely to be a phishing website. We check two webpage source codes for legitimate and phishing websites and compare the security percentages between them, we find the phishing website is less security percentage than the legitimate website; our approach can detect the phishing website based on checking phishing characteristics in the webpage source code.",
"title": ""
},
{
"docid": "fdf979667641e1447f237eb25605c76b",
"text": "A green synthesis of highly stable gold and silver nanoparticles (NPs) using arabinoxylan (AX) from ispaghula (Plantago ovata) seed husk is being reported. The NPs were synthesized by stirring a mixture of AX and HAuCl(4)·H(2)O or AgNO(3), separately, below 100 °C for less than an hour, where AX worked as the reducing and the stabilizing agent. The synthesized NPs were characterized by surface plasmon resonance (SPR) spectroscopy, transmission electron microscopy (TEM), atomic force microscopy (AFM), and X-ray diffraction (XRD). The particle size was (silver: 5-20 nm and gold: 8-30 nm) found to be dependent on pH, temperature, reaction time and concentrations of AX and the metal salts used. The NPs were poly-dispersed with a narrow range. They were stable for more than two years time.",
"title": ""
},
{
"docid": "ecd99c9f87e1c5e5f529cb5fcbb206f2",
"text": "The concept of supply chain is about managing coordinated information and material flows, plant operations, and logistics. It provides flexibility and agility in responding to consumer demand shifts without cost overlays in resource utilization. The fundamental premise of this philosophy is; synchronization among multiple autonomous business entities represented in it. That is, improved coordination within and between various supply-chain members. Increased coordination can lead to reduction in lead times and costs, alignment of interdependent decision-making processes, and improvement in the overall performance of each member as well as the supply chain. Describes architecture to create the appropriate structure, install proper controls, and implement principles of optimization to synchronize the supply chain. A supply-chain model based on a collaborative system approach is illustrated utilizing the example of the textile industry. process flexibility and coordination of processes across many sites. More and more organizations are promoting employee empowerment and the need for rules-based, real-time decision support systems to attain organizational and process flexibility, as well as to respond to competitive pressure to introduce new products more quickly, cheaply and of improved quality. The underlying philosophy of managing supply chains has evolved to respond to these changing business trends. Supply-chain management phenomenon has received the attention of researchers and practitioners in various topics. In the earlier years, the emphasis was on materials planning utilizing materials requirements planning techniques, inventory logistics management with one warehouse multi-retailer distribution system, and push and pull operation techniques for production systems. In the last few years, however, there has been a renewed interest in designing and implementing integrated systems, such as enterprise resource planning, multi-echelon inventory, and synchronous-flow manufacturing, respectively. A number of factors have contributed to this shift. First, there has been a realization that better planning and management of complex interrelated systems, such as materials planning, inventory management, capacity planning, logistics, and production systems will lead to overall improvement in enterprise productivity. Second, advances in information and communication technologies complemented by sophisticated decision support systems enable the designing, implementing and controlling of the strategic and tactical strategies essential to delivery of integrated systems. In the next section, a framework that offers an unified approach to dealing with enterprise related problems is presented. A framework for analysis of enterprise integration issues As mentioned in the preceding section, the availability of advanced production and logistics management systems has the potential of fundamentally influencing enterprise integration issues. The motivation in pursuing research issues described in this paper is to propose a framework that enables dealing with these effectively. The approach suggested in this paper utilizing supply-chain philosophy for enterprise integration proposes domain independent problem solving and modeling, and domain dependent analysis and implementation. The purpose of the approach is to ascertain characteristics of the problem independent of the specific problem environment. 
Consequently, the approach delivers solution(s) or the solution method that are intrinsic to the problem and not its environment. Analysis methods help to understand characteristics of the solution methodology, as well as providing specific guarantees of effectiveness. Invariably, insights gained from these analyses can be used to develop effective problem solving tools and techniques for complex enterprise integration problems. The discussion of the framework is organized as follows. First, the key guiding principles of the proposed framework on which a supply chain ought to be built are outlined. Then, a cooperative supply-chain (CSC) system is described as a special class of a supply-chain network implementation. Next, discussion on a distributed problem-solving strategy that could be employed in integrating this type of system is presented. Following this, key components of a CSC system are described. Finally, insights on modeling a CSC system are offered. Key modeling principles are elaborated through two distinct modeling approaches in the management science discipline. Supply chain guiding principles: Firms have increasingly been adopting enterprise/supply-chain management techniques in order to deal with integration issues. To focus on these integration efforts, the following guiding principles for the supply-chain framework are proposed. These principles encapsulate trends in production and logistics management that a supply-chain arrangement may be designed to capture. . Supply chain is a cooperative system. The supply-chain arrangement exists on cooperation among its members. Cooperation occurs in many forms, such as sharing common objectives and goals for the group entity; utilizing joint policies, for instance in marketing and production; setting up common budgets, cost and price structures; and identifying commitments on capacity, production plans, etc. . Supply chain exists on the group dynamics of its members. The existence of a supply chain is dependent on the interaction among its members. This interaction occurs in the form of exchange of information with regard to input, output, functions and controls, such as objectives and goals, and policies. By analyzing this information, members of a supply chain may choose to modify their behavior attuned with group expectations. . Negotiation and compromise are norms of operation in a supply chain. In order to realize goals and objectives of the group, members negotiate on commitments made to one another for price, capacity, production plans, etc. These negotiations often lead to compromises by one or many members on these issues, leading up to realization of sub-optimal goals and objectives by members. . Supply-chain system solutions are Pareto-optimal (satisficing), not optimizing. Supply-chain problems similar to many real world applications involve several objective functions of its members simultaneously. In all such applications, it is extremely rare to have one feasible solution that simultaneously optimizes all of the objective functions. Typically, optimizing one of the objective functions has the effect of moving another objective function away from its most desirable value. These are the usual conflicts among the objective functions in the multiobjective models. As a multi-objective problem, the supply-chain model produces non-dominated or Pareto-optimal solutions. 
That is, solutions for a supplychain problem do not leave any member worse-off at the expense of another. . Integration in supply chain is achieved through synchronization. Integration across the supply chain is achieved through synchronization of activities at the member entity and aggregating its impact through process, function, business, and on to enterprise levels, either at the member entity or the group entity. Thus, by synchronization of supply-chain components, existing bottlenecks in the system are eliminated, while future ones are prevented from occurring. A cooperative supply-chain A supply-chain network depicted in Figure 1 can be a complex web of systems, sub-systems, operations, activities, and their relationships to one another, belonging to its various members namely, suppliers, carriers, manufacturing plants, distribution centers, retailers, and consumers. The design, modeling and implementation of such a system, therefore, can be difficult, unless various parts of it are cohesively tied to the whole. The concept of a supply-chain is about managing coordinated information and material flows, plant operations, and logistics through a common set of principles, strategies, policies, and performance metrics throughout its developmental life cycle (Lee and Billington, 1993). It provides flexibility and agility in responding to consumer demand shifts with minimum cost overlays in resource utilization. The fundamental premise of this philosophy is synchronization among multiple autonomous entities represented in it. That is, improved coordination within and between various supply-chain members. Coordination is achieved within the framework of commitments made by members to each other. Members negotiate and compromise in a spirit of cooperation in order to meet these commitments. Hence, the label(CSC). Increased coordination can lead to reduction in lead times and costs, alignment of interdependent decisionmaking processes, and improvement in the overall performance of each member, as well as the supply-chain (group) (Chandra, 1997; Poirier, 1999; Tzafastas and Kapsiotis, 1994). A generic textile supply chain has for its primary raw material vendors, cotton growers and/or chemical suppliers, depending upon whether the end product is cotton, polyester or some combination of cotton and polyester garment. Secondary raw material vendors are suppliers of accessories such as, zippers, buttons, thread, garment tags, etc. Other tier suppliers in the complete pipeline are: fiber manufacturers for producing the polyester or cotton fiber yarn; textile manufacturers for weaving and dying yarn into colored textile fabric; an apparel maker for cutting, sewing and packing the garment; a distribution center for merchandising the garment; and a retailer selling the brand name garment to consumers at a shopping mall or center. Synchronization of the textile supply chain is achieved through coordination primarily of: . replenishment schedules that have be",
"title": ""
},
{
"docid": "5db5bed638cd8c5c629f9bebef556730",
"text": "The health benefits of garlic likely arise from a wide variety of components, possibly working synergistically. The complex chemistry of garlic makes it plausible that variations in processing can yield quite different preparations. Highly unstable thiosulfinates, such as allicin, disappear during processing and are quickly transformed into a variety of organosulfur components. The efficacy and safety of these preparations in preparing dietary supplements based on garlic are also contingent on the processing methods employed. Although there are many garlic supplements commercially available, they fall into one of four categories, i.e., dehydrated garlic powder, garlic oil, garlic oil macerate and aged garlic extract (AGE). Garlic and garlic supplements are consumed in many cultures for their hypolipidemic, antiplatelet and procirculatory effects. In addition to these proclaimed beneficial effects, some garlic preparations also appear to possess hepatoprotective, immune-enhancing, anticancer and chemopreventive activities. Some preparations appear to be antioxidative, whereas others may stimulate oxidation. These additional biological effects attributed to AGE may be due to compounds, such as S-allylcysteine, S-allylmercaptocysteine, N(alpha)-fructosyl arginine and others, formed during the extraction process. Although not all of the active ingredients are known, ample research suggests that several bioavailable components likely contribute to the observed beneficial effects of garlic.",
"title": ""
},
{
"docid": "2c4a2d41653f05060ff69f1c9ad7e1a6",
"text": "Until recently the information technology (IT)-centricity was the prevailing paradigm in cyber security that was organized around confidentiality, integrity and availability of IT assets. Despite of its widespread usage, the weakness of IT-centric cyber security became increasingly obvious with the deployment of very large IT infrastructures and introduction of highly mobile tactical missions where the IT-centric cyber security was not able to take into account the dynamics of time and space bound behavior of missions and changes in their operational context. In this paper we will show that the move from IT-centricity towards to the notion of cyber attack resilient missions opens new opportunities in achieving the completion of mission goals even if the IT assets and services that are supporting the missions are under cyber attacks. The paper discusses several fundamental architectural principles of achieving cyber attack resilience of missions, including mission-centricity, survivability through adaptation, synergistic mission C2 and mission cyber security management, and the real-time temporal execution of the mission tasks. In order to achieve the overall system resilience and survivability under a cyber attack, both, the missions and the IT infrastructure are considered as two interacting adaptable multi-agent systems. While the paper is mostly concerned with the architectural principles of achieving cyber attack resilient missions, several models and algorithms that support resilience of missions are discussed in fairly detailed manner.",
"title": ""
}
] |
scidocsrr
|
07e4b7aa9c45c55ad067b1c298f3bacb
|
Business Process Modeling: Current Issues and Future Challenges
|
[
{
"docid": "49db1291f3f52a09037d6cfd305e8b5f",
"text": "This paper examines cognitive beliefs and affect influencing ones intention to continue using (continuance) information systems (IS). Expectationconfirmation theory is adapted from the consumer behavior literature and integrated with theoretical and empirical findings from prior IS usage research to theorize a model of IS continuance. Five research hypotheses derived from this model are empirically validated using a field survey of online banking users. The results suggest that users continuance intention is determined by their satisfaction with IS use and perceived usefulness of continued IS use. User satisfaction, in turn, is influenced by their confirmation of expectation from prior IS use and perceived usefulness. Postacceptance perceived usefulness is influenced by Ron Weber was the accepting senior editor for this paper. users confirmation level. This study draws attention to the substantive differences between acceptance and continuance behaviors, theorizes and validates one of the earliest theoretical models of IS continuance, integrates confirmation and user satisfaction constructs within our current understanding of IS use, conceptualizes and creates an initial scale for measuring IS continuance, and offers an initial explanation for the acceptancediscontinuance anomaly.",
"title": ""
},
{
"docid": "f66854fd8e3f29ae8de75fc83d6e41f5",
"text": "This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature.",
"title": ""
}
] |
[
{
"docid": "2620ce1c5ef543fded3a02dfb9e5c3f8",
"text": "Artificial bee colony (ABC) is the one of the newest nature inspired heuristics for optimization problem. Like the chaos in real bee colony behavior, this paper proposes new ABC algorithms that use chaotic maps for parameter adaptation in order to improve the convergence characteristics and to prevent the ABC to get stuck on local solutions. This has been done by using of chaotic number generators each time a random number is needed by the classical ABC algorithm. Seven new chaotic ABC algorithms have been proposed and different chaotic maps have been analyzed in the benchmark functions. It has been detected that coupling emergent results in different areas, like those of ABC and complex dynamics, can improve the quality of results in some optimization problems. It has been also shown that, the proposed methods have somewhat increased the solution quality, that is in some cases they improved the global searching capability by escaping the local solutions. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "465a5d9a2fe72abaf1fb8c2c041a9b64",
"text": "A huge number of academic papers are coming out from a lot of conferences and journals these days. In these circumstances, most researchers rely on key-based search or browsing through proceedings of top conferences and journals to find their related work. To ease this difficulty, we propose a Personalized Academic Research Paper Recommendation System, which recommends related articles, for each researcher, that may be interesting to her/him. In this paper, we first introduce our web crawler to retrieve research papers from the web. Then, we define similarity between two research papers based on the text similarity between them. Finally, we propose our recommender system developed using collaborative filtering methods. Our evaluation results demonstrate that our system recommends good quality research papers.",
"title": ""
},
{
"docid": "a8b5f7a5ab729a7f1664c5a22f3b9d9b",
"text": "The smart grid is an electronically controlled electrical grid that connects power generation, transmission, distribution, and consumers using information communication technologies. One of the key characteristics of the smart grid is its support for bi-directional information flow between the consumer of electricity and the utility provider. This two-way interaction allows electricity to be generated in real-time based on consumers’ demands and power requests. As a result, consumer privacy becomes an important concern when collecting energy usage data with the deployment and adoption of smart grid technologies. To protect such sensitive information it is imperative that privacy protection mechanisms be used to protect the privacy of smart grid users. We present an analysis of recently proposed smart grid privacy solutions and identify their strengths and weaknesses in terms of their implementation complexity, efficiency, robustness, and simplicity.",
"title": ""
},
{
"docid": "71a65ff432ae4b53085ca5c923c29a95",
"text": "Data provenance is essential for debugging query results, auditing data in cloud environments, and explaining outputs of Big Data analytics. A well-established technique is to represent provenance as annotations on data and to instrument queries to propagate these annotations to produce results annotated with provenance. However, even sophisticated optimizers are often incapable of producing efficient execution plans for instrumented queries, because of their inherent complexity and unusual structure. Thus, while instrumentation enables provenance support for databases without requiring any modification to the DBMS, the performance of this approach is far from optimal. In this work, we develop provenancespecific optimizations to address this problem. Specifically, we introduce algebraic equivalences targeted at instrumented queries and discuss alternative, equivalent ways of instrumenting a query for provenance capture. Furthermore, we present an extensible heuristic and cost-based optimization (CBO) framework that governs the application of these optimizations and implement this framework in our GProM provenance system. Our CBO is agnostic to the plan space shape, uses a DBMS for cost estimation, and enables retrofitting of optimization choices into existing code by adding a few LOC. Our experiments confirm that these optimizations are highly effective, often improving performance by several orders of magnitude for diverse provenance tasks.",
"title": ""
},
{
"docid": "31e25f598fb15964358c482b6c37271f",
"text": "Any bacterial population harbors a small number of phenotypic variants that survive exposure to high concentrations of antibiotic. Importantly, these so-called 'persister cells' compromise successful antibiotic therapy of bacterial infections and are thought to contribute to the development of antibiotic resistance. Intriguingly, drug-tolerant persisters have also been identified as a factor underlying failure of chemotherapy in tumor cell populations. Recent studies have begun to unravel the complex molecular mechanisms underlying persister formation and revolve around stress responses and toxin-antitoxin modules. Additionally, in vitro evolution experiments are revealing insights into the evolutionary and adaptive aspects of this phenotype. Furthermore, ever-improving experimental techniques are stimulating efforts to investigate persisters in their natural, infection-associated, in vivo environment. This review summarizes recent insights into the molecular mechanisms of persister formation, explains how persisters complicate antibiotic treatment of infections, and outlines emerging strategies to combat these tolerant cells.",
"title": ""
},
{
"docid": "85c133eecada3c4bb25f96ad8127eec3",
"text": "With the advent of brain computer interfaces based on real-time fMRI (rtfMRI-BCI), the possibility of performing neurofeedback based on brain hemodynamics has become a reality. In the early stage of the development of this field, studies have focused on the volitional control of activity in circumscribed brain regions. However, based on the understanding that the brain functions by coordinated activity of spatially distributed regions, there have recently been further developments to incorporate real-time feedback of functional connectivity and spatio-temporal patterns of brain activity. The present article reviews the principles of rtfMRI neurofeedback, its applications, benefits and limitations. A special emphasis is given to the discussion of novel developments that have enabled the use of this methodology to achieve self-regulation of the functional connectivity between different brain areas and of distributed brain networks, anticipating new and exciting applications for cognitive neuroscience and for the potential alleviation of neuropsychiatric disorders.",
"title": ""
},
{
"docid": "7ec6147d4549f07c2d8c6c24566faa2f",
"text": "Physical unclonable functions (PUFs) are security features that are based on process variations that occur during silicon chip fabrication. As PUFs are dependent on process variations, they need to be robust against reversible and irreversible temporal variabilities. In this paper, we present experimental results showing temporal variability in 4, 5, and 7-stage ring oscillator PUFs (ROPUFs). The reversible temporal variabilities are studied based on voltage and temperature variations, and the irreversible temporal variabilities are studied based on accelerated aging. Our results show that ROPUFs are sensitive to temperature and voltage variations regardless of the number of RO stages used. It is also observed that the aging, temperature, and voltage variation effects are observed to be uniformly distributed throughout the chip. This is evidenced by noting uniform changes in the RO frequency. Our results also show that most of the bit flips occur when the frequency difference in the RO pairs is low. This leads us to the conclusion that RO comparison pairs that pass high frequency threshold should be filtered to reduce temporal variabilities effect on the ROPUF. The experimental results also show that the 3-stage ROPUF has the lowest percentage of bit flip occurrences and the highest number of RO comparison pairs that pass high frequency threshold.",
"title": ""
},
{
"docid": "439c0a4e0c2171c066a2a8286842f0c2",
"text": "Capacitive rotary encoders are widely used in motor velocity and angular position control, where high-speed and high-precision angle calculation is required. This paper illustrates implementation of arctangent operation, based on the CORDIC (an acronym for COordinate Rotational DIgital Computer) algorithm, in the capacitive rotary encoder signal demodulation in an FPGA to obtain the motor velocity and position. By skipping some unnecessary rotation in CORDIC algorithm, we improve the algorithm's computing accuracy. Experiments show that the residue angle error is almost reduced by half after the CORDIC algorithm is optimized, and is completely meet the precision requirements of the system.",
"title": ""
},
{
"docid": "d88059813c4064ec28c58a8ab23d3030",
"text": "Routing in Vehicular Ad hoc Networks is a challenging task due to the unique characteristics of the network such as high mobility of nodes, dynamically changing topology and highly partitioned network. It is a challenge to ensure reliable, continuous and seamless communication in the presence of speeding vehicles. The performance of routing protocols depends on various internal factors such as mobility of nodes and external factors such as road topology and obstacles that block the signal. This demands a highly adaptive approach to deal with the dynamic scenarios by selecting the best routing and forwarding strategies and by using appropriate mobility and propagation models. In this paper we review the existing routing protocols for VANETs and categorise them into a taxonomy based on key attributes such as network architecture, applications supported, routing strategies, forwarding strategies, mobility models and quality of service metrics. Protocols belonging to unicast, multicast, geocast and broadcast categories are discussed. Strengths and weaknesses of various protocols using topology based, position based and cluster based approaches are analysed. Emphasis is given on the adaptive and context-aware routing protocols. Simulation of broadcast and unicast protocols is carried out and the results are presented.",
"title": ""
},
{
"docid": "196868f85571b16815127d2bd87b98ff",
"text": "Scientists have predicted that carbon’s immediate neighbors on the periodic chart, boron and nitrogen, may also form perfect nanotubes, since the advent of carbon nanotubes (CNTs) in 1991. First proposed then synthesized by researchers at UC Berkeley in the mid 1990’s, the boron nitride nanotube (BNNT) has proven very difficult to make until now. Herein we provide an update on a catalyst-free method for synthesizing highly crystalline, small diameter BNNTs with a high aspect ratio using a high power laser under a high pressure and high temperature environment first discovered jointly by NASA/NIA JSA. Progress in purification methods, dispersion studies, BNNT mat and composite formation, and modeling and diagnostics will also be presented. The white BNNTs offer extraordinary properties including neutron radiation shielding, piezoelectricity, thermal oxidative stability (> 800 ̊C in air), mechanical strength, and toughness. The characteristics of the novel BNNTs and BNNT polymer composites and their potential applications are discussed.",
"title": ""
},
{
"docid": "675007890407b7e8a7d15c1255e77ec6",
"text": "This study investigated the influence of the completeness of CRM relational information processes on customer-based relational performance and profit performance. In addition, interaction orientation and CRM readiness were adopted as moderators on the relationship between CRM relational information processes and customer-based performance. Both qualitative and quantitative approaches were applied in this study. The results revealed that the completeness of CRM relational information processes facilitates customer-based relational performance (i.e., customer satisfaction, and positive WOM), and in turn enhances profit performance (i.e., efficiency with regard to identifying, acquiring and retaining, and converting unprofitable customers to profitable ones). The alternative model demonstrated that both interaction orientation and CRM readiness play a mediating role in the relationship between information processes and relational performance. Managers should strengthen the completeness and smoothness of CRM information processes, should increase the level of interactional orientation with customers and should maintain firm CRM readiness to service their customers. The implications of this research and suggestions for managers were also discussed.",
"title": ""
},
{
"docid": "122bc83bcd27b95092c64cf1ad8ee6a8",
"text": "Plants make the world, a greener and a better place to live in. Although all plants need water to survive, giving them too much or too little can cause them to die. Thus, we need to implement an automatic plant watering system that ensures that the plants are watered at regular intervals, with appropriate amount, whenever they are in need. This paper describes the object oriented design of an IoT based Automated Plant Watering System.",
"title": ""
},
{
"docid": "ad2655aaed8a4f3379cb206c6e405f16",
"text": "Lesions of the orbital frontal lobe, particularly its medial sectors, are known to cause deficits in empathic ability, whereas the role of this region in theory of mind processing is the subject of some controversy. In a functional magnetic resonance imaging study with healthy participants, emotional perspective-taking was contrasted with cognitive perspective-taking in order to examine the role of the orbital frontal lobe in subcomponents of theory of mind processing. Subjects responded to a series of scenarios presented visually in three conditions: emotional perspective-taking, cognitive perspective-taking and a control condition that required inferential reasoning, but not perspective-taking. Group results demonstrated that the medial orbitofrontal lobe, defined as Brodmann's areas 11 and 25, was preferentially involved in emotional as compared to cognitive perspective-taking. This finding is both consistent with the lesion literature, and resolves the inconsistency of orbital frontal findings in the theory of mind literature.",
"title": ""
},
{
"docid": "d60f812bb8036a2220dab8740f6a74c4",
"text": "UNLABELLED\nThe limit of the Colletotrichum gloeosporioides species complex is defined genetically, based on a strongly supported clade within the Colletotrichum ITS gene tree. All taxa accepted within this clade are morphologically more or less typical of the broadly defined C. gloeosporioides, as it has been applied in the literature for the past 50 years. We accept 22 species plus one subspecies within the C. gloeosporioides complex. These include C. asianum, C. cordylinicola, C. fructicola, C. gloeosporioides, C. horii, C. kahawae subsp. kahawae, C. musae, C. nupharicola, C. psidii, C. siamense, C. theobromicola, C. tropicale, and C. xanthorrhoeae, along with the taxa described here as new, C. aenigma, C. aeschynomenes, C. alatae, C. alienum, C. aotearoa, C. clidemiae, C. kahawae subsp. ciggaro, C. salsolae, and C. ti, plus the nom. nov. C. queenslandicum (for C. gloeosporioides var. minus). All of the taxa are defined genetically on the basis of multi-gene phylogenies. Brief morphological descriptions are provided for species where no modern description is available. Many of the species are unable to be reliably distinguished using ITS, the official barcoding gene for fungi. Particularly problematic are a set of species genetically close to C. musae and another set of species genetically close to C. kahawae, referred to here as the Musae clade and the Kahawae clade, respectively. Each clade contains several species that are phylogenetically well supported in multi-gene analyses, but within the clades branch lengths are short because of the small number of phylogenetically informative characters, and in a few cases individual gene trees are incongruent. Some single genes or combinations of genes, such as glyceraldehyde-3-phosphate dehydrogenase and glutamine synthetase, can be used to reliably distinguish most taxa and will need to be developed as secondary barcodes for species level identification, which is important because many of these fungi are of biosecurity significance. In addition to the accepted species, notes are provided for names where a possible close relationship with C. gloeosporioides sensu lato has been suggested in the recent literature, along with all subspecific taxa and formae speciales within C. gloeosporioides and its putative teleomorph Glomerella cingulata.\n\n\nTAXONOMIC NOVELTIES\nName replacement - C. queenslandicum B. Weir & P.R. Johnst. New species - C. aenigma B. Weir & P.R. Johnst., C. aeschynomenes B. Weir & P.R. Johnst., C. alatae B. Weir & P.R. Johnst., C. alienum B. Weir & P.R. Johnst, C. aotearoa B. Weir & P.R. Johnst., C. clidemiae B. Weir & P.R. Johnst., C. salsolae B. Weir & P.R. Johnst., C. ti B. Weir & P.R. Johnst. New subspecies - C. kahawae subsp. ciggaro B. Weir & P.R. Johnst. Typification: Epitypification - C. queenslandicum B. Weir & P.R. Johnst.",
"title": ""
},
{
"docid": "872d1f216a463b354221be8b68d35d96",
"text": "Table 2 – Results of the proposed method for different voting schemes and variants compared to a method from the literature Diet management is a key factor for the prevention and treatment of diet-related chronic diseases. Computer vision systems aim to provide automated food intake assessment using meal images. We propose a method for the recognition of food items in meal images using a deep convolutional neural network (CNN) followed by a voting scheme. Our approach exploits the outstanding descriptive ability of a CNN, while the patch-wise model allows the generation of sufficient training samples, provides additional spatial flexibility for the recognition and ignores background pixels.",
"title": ""
},
{
"docid": "73d58bbe0550fb58efc49ae5f84a1c7b",
"text": "In this study, we will present the novel application of Type-2 (T2) fuzzy control into the popular video game called flappy bird. To the best of our knowledge, our work is the first deployment of the T2 fuzzy control into the computer games research area. We will propose a novel T2 fuzzified flappy bird control system that transforms the obstacle avoidance problem of the game logic into the reference tracking control problem. The presented T2 fuzzy control structure is composed of two important blocks which are the reference generator and Single Input Interval T2 Fuzzy Logic Controller (SIT2-FLC). The reference generator is the mechanism which uses the bird's position and the pipes' positions to generate an appropriate reference signal to be tracked. Thus, a conventional fuzzy feedback control system can be defined. The generated reference signal is tracked via the presented SIT2-FLC that can be easily tuned while also provides a certain degree of robustness to system. We will investigate the performance of the proposed T2 fuzzified flappy bird control system by providing comparative simulation results and also experimental results performed in the game environment. It will be shown that the proposed T2 fuzzified flappy bird control system results with a satisfactory performance both in the framework of fuzzy control and computer games. We believe that this first attempt of the employment of T2-FLCs in games will be an important step for a wider deployment of T2-FLCs in the research area of computer games.",
"title": ""
},
{
"docid": "14dc7c8065adad3fc3c67f5a8e35298b",
"text": "This paper describes a method for maximum power point tracking (MPPT) control while searching for optimal parameters corresponding to weather conditions at that time. The conventional method has problems in that it is impossible to quickly acquire the generation power at the maximum power (MP) point in low solar radiation (irradiation) regions. It is found theoretically and experimentally that the maximum output power and the optimal current, which give this maximum, have a linear relation at a constant temperature. Furthermore, it is also shown that linearity exists between the short-circuit current and the optimal current. MPPT control rules are created based on the findings from solar arrays that can respond at high speeds to variations in irradiation. The proposed MPPT control method sets the output current track on the line that gives the relation between the MP and the optimal current so as to acquire the MP that can be generated at that time by dividing the power and current characteristics into two fields. The method is based on the generated power being a binary function of the output current. Considering the experimental fact that linearity is maintained only at low irradiation below half the maximum irradiation, the proportionality coefficient (voltage coefficient) is compensated for only in regions with more than half the rated optimal current, which correspond to the maximum irradiation. At high irradiation, the voltage coefficient needed to perform the proposed MPPT control is acquired through the hill-climbing method. The effectiveness of the proposed method is verified through experiments under various weather conditions",
"title": ""
},
{
"docid": "9c262b845fff31abd1cbc2932957030d",
"text": "Dixon's method for computing multivariate resultants by simultaneously eliminating many variables is reviewed. The method is found to be quite restrictive because often the Dixon matrix is singular, and the Dixon resultant vanished identically yielding no information about solutions for many algebraic and geometry problems. We extend Dixon's method for the case when the Dixon matrix is singular, but satisfies a condition. An efficient algorithm is developed based on the proposed extension for extracting conditions for the existence of affine solutions of a finite set of polynomials. Using this algorithm, numerous geometric and algebraic identities are derived for examples which appear intractable with other techniques of triangulation such as the successive resultant method, the Gro¨bner basis method, Macaulay resultants and Characteristic set method. Experimental results suggest that the resultant of a set of polynomials which are symmetric in the variables is relatively easier to compute using the extended Dixon's method.",
"title": ""
},
{
"docid": "73f605a48d0494d0007f242cee5c67ff",
"text": "BACKGROUND\nLarge comparative studies that have evaluated long-term functional outcome of operatively treated ankle fractures are lacking. This study was performed to analyse the influence of several combinations of malleolar fractures on long-term functional outcome and development of osteoarthritis.\n\n\nMETHODS\nRetrospective cohort-study on operated (1995-2007) malleolar fractures. Results were assessed with use of the AAOS- and AOFAS-questionnaires, VAS-pain score, dorsiflexion restriction (range of motion) and osteoarthritis. Categorisation was determined using the number of malleoli involved.\n\n\nRESULTS\n243 participants with a mean follow-up of 9.6 years were included. Significant differences for all outcomes were found between unimalleolar (isolated fibular) and bimalleolar (a combination of fibular and medial) fractures (AOFAS 97 vs 91, p = 0.035; AAOS 97 vs 90, p = 0.026; dorsiflexion restriction 2.8° vs 6.7°, p = 0.003). Outcomes after fibular fractures with an additional posterior fragment were similar to isolated fibular fractures. However, significant differences were found between unimalleolar and trimalleolar (a combination of lateral, medial and posterior) fractures (AOFAS 97 vs 88, p < 0.001; AAOS 97 vs 90, p = 0.003; VAS-pain 1.1 vs 2.3 p < 0.001; dorsiflexion restriction 2.9° vs 6.9°, p < 0.001). There was no significant difference in isolated fibular fractures with or without additional deltoid ligament injury. In addition, no functional differences were found between bimalleolar and trimalleolar fractures. Surprisingly, poor outcomes were found for isolated medial malleolar fractures. Development of osteoarthritis occurred mainly in trimalleolar fractures with a posterior fragment larger than 5 %.\n\n\nCONCLUSIONS\nThe results of our study show that long-term functional outcome is strongly associated to medial malleolar fractures, isolated or as part of bi- or trimalleolar fractures. More cases of osteoarthritis are found in trimalleolar fractures.",
"title": ""
},
{
"docid": "0f24b6c36586505c1f4cc001e3ddff13",
"text": "A novel model for asymmetric multiagent reinforcement learning is introduced in this paper. The model addresses the problem where the information states of the agents involved in the learning task are not equal; some agents (leaders) have information how their opponents (followers) will select their actions and based on this information leaders encourage followers to select actions that lead to improved payoffs for the leaders. This kind of configuration arises e.g. in semi-centralized multiagent systems with an external global utility associated to the system. We present a brief literature survey of multiagent reinforcement learning based on Markov games and then propose an asymmetric learning model that utilizes the theory of Markov games. Additionally, we construct a practical learning method based on the proposed learning model and study its convergence properties. Finally, we test our model with a simple example problem and a larger two-layer pricing application.",
"title": ""
}
] |
scidocsrr
|
5dd0919efe858a49f7b21cd278e925ec
|
A hybrid approach to fake news detection on social media
|
[
{
"docid": "35b025c508a568e0d73d9113c7cdb5e2",
"text": "Satire is an attractive subject in deception detection research: it is a type of deception that intentionally incorporates cues revealing its own deceptiveness. Whereas other types of fabrications aim to instill a false sense of truth in the reader, a successful satirical hoax must eventually be exposed as a jest. This paper provides a conceptual overview of satire and humor, elaborating and illustrating the unique features of satirical news, which mimics the format and style of journalistic reporting. Satirical news stories were carefully matched and examined in contrast with their legitimate news counterparts in 12 contemporary news topics in 4 domains (civics, science, business, and “soft” news). Building on previous work in satire detection, we proposed an SVMbased algorithm, enriched with 5 predictive features (Absurdity, Humor, Grammar, Negative Affect, and Punctuation) and tested their combinations on 360 news articles. Our best predicting feature combination (Absurdity, Grammar and Punctuation) detects satirical news with a 90% precision and 84% recall (F-score=87%). Our work in algorithmically identifying satirical news pieces can aid in minimizing the potential deceptive impact of satire.",
"title": ""
}
] |
[
{
"docid": "f9eed4f99d70c51dc626a61724540d3c",
"text": "A soft-start circuit with soft-recovery function for DC-DC converters is presented in this paper. The soft-start strategy is based on a linearly ramped-up reference and an error amplifier with minimum selector implemented with a three-limb differential pair skillfully. The soft-recovery strategy is based on a compact clamp circuit. The ramp voltage would be clamped once the feedback voltage is detected lower than a threshold, which could control the output to be recovered slowly and linearly. A monolithic DC-DC buck converter with proposed circuit has been fabricated with a 0.5μm CMOS process for validation. The measurement result shows that the ramp-based soft-start and soft-recovery circuit have good performance and agree well with the theoretical analysis.",
"title": ""
},
{
"docid": "b93888e6c47d6f2dab8e1b2d0fec8b14",
"text": "We developed and evaluated a multimodal affect detector that combines conversational cues, gross body language, and facial features. The multimodal affect detector uses feature-level fusion to combine the sensory channels and linear discriminant analyses to discriminate between naturally occurring experiences of boredom, engagement/flow, confusion, frustration, delight, and neutral. Training and validation data for the affect detector were collected in a study where 28 learners completed a 32- min. tutorial session with AutoTutor, an intelligent tutoring system with conversational dialogue. Classification results supported a channel × judgment type interaction, where the face was the most diagnostic channel for spontaneous affect judgments (i.e., at any time in the tutorial session), while conversational cues were superior for fixed judgments (i.e., every 20 s in the session). The analyses also indicated that the accuracy of the multichannel model (face, dialogue, and posture) was statistically higher than the best single-channel model for the fixed but not spontaneous affect expressions. However, multichannel models reduced the discrepancy (i.e., variance in the precision of the different emotions) of the discriminant models for both judgment types. The results also indicated that the combination of channels yielded superadditive effects for some affective states, but additive, redundant, and inhibitory effects for others. We explore the structure of the multimodal linear discriminant models and discuss the implications of some of our major findings.",
"title": ""
},
{
"docid": "ef4ea0d489cd9cb7ea4734d96b379721",
"text": "A huge repository of terabytes of data is generated each day from modern information systems and digital technologies such as Internet of Things and cloud computing. Analysis of these massive data requires a lot of efforts at multiple levels to extract knowledge for decision making. Therefore, big data analysis is a current area of research and development. The basic objective of this paper is to explore the potential impact of big data challenges, open research issues, and various tools associated with it. As a result, this article provides a platform to explore big data at numerous stages. Additionally, it opens a new horizon for researchers to develop the solution, based on the challenges and open research issues. Keywords—Big data analytics; Hadoop; Massive data; Structured data; Unstructured Data",
"title": ""
},
{
"docid": "9be0134d63fbe6978d786280fb133793",
"text": "Yann Le Cun AT&T Bell Labs Holmdel NJ 07733 We introduce a new approach for on-line recognition of handwritten words written in unconstrained mixed style. The preprocessor performs a word-level normalization by fitting a model of the word structure using the EM algorithm. Words are then coded into low resolution \"annotated images\" where each pixel contains information about trajectory direction and curvature. The recognizer is a convolution network which can be spatially replicated. From the network output, a hidden Markov model produces word scores. The entire system is globally trained to minimize word-level errors.",
"title": ""
},
{
"docid": "de333f099bad8a29046453e099f91b84",
"text": "Financial time-series forecasting has long been a challenging problem because of the inherently noisy and stochastic nature of the market. In the high-frequency trading, forecasting for trading purposes is even a more challenging task, since an automated inference system is required to be both accurate and fast. In this paper, we propose a neural network layer architecture that incorporates the idea of bilinear projection as well as an attention mechanism that enables the layer to detect and focus on crucial temporal information. The resulting network is highly interpretable, given its ability to highlight the importance and contribution of each temporal instance, thus allowing further analysis on the time instances of interest. Our experiments in a large-scale limit order book data set show that a two-hidden-layer network utilizing our proposed layer outperforms by a large margin all existing state-of-the-art results coming from much deeper architectures while requiring far fewer computations.",
"title": ""
},
{
"docid": "f1b1dc51cf7a6d8cb3b644931724cad6",
"text": "OBJECTIVE\nTo evaluate the curing profile of bulk-fill resin-based composites (RBC) using micro-Raman spectroscopy (μRaman).\n\n\nMETHODS\nFour bulk-fill RBCs were compared to a conventional RBC. RBC blocks were light-cured using a polywave LED light-curing unit. The 24-h degree of conversion (DC) was mapped along a longitudinal cross-section using μRaman. Curing profiles were constructed and 'effective' (>90% of maximum DC) curing parameters were calculated. A statistical linear mixed effects model was constructed to analyze the relative effect of the different curing parameters.\n\n\nRESULTS\nCuring efficiency differed widely with the flowable bulk-fill RBCs presenting a significantly larger 'effective' curing area than the fibre-reinforced RBC, which on its turn revealed a significantly larger 'effective' curing area than the full-depth bulk-fill and conventional (control) RBC. A decrease in 'effective' curing depth within the light beam was found in the same order. Only the flowable bulk-fill RBCs were able to cure 'effectively' at a 4-mm depth for the whole specimen width (up to 4mm outside the light beam). All curing parameters were found to statistically influence the statistical model and thus the curing profile, except for the beam inhomogeneity (regarding the position of the 410-nm versus that of 470-nm LEDs) that did not significantly affect the model for all RBCs tested.\n\n\nCONCLUSIONS\nMost of the bulk-fill RBCs could be cured up to at least a 4-mm depth, thereby validating the respective manufacturer's recommendations.\n\n\nCLINICAL SIGNIFICANCE\nAccording to the curing profiles, the orientation and position of the light guide is less critical for the bulk-fill RBCs than for the conventional RBC.",
"title": ""
},
{
"docid": "bf3b8f8a609aa210b87c735743af3f31",
"text": "It has been pointed out that about 30% of the traffic congestion is caused by vehicles cruising around their destination and looking for a place to park. Therefore, addressing the problems associated with parking in crowded urban areas is of great significance. One effective solution is providing guidance for the vehicles to be parked according to the occupancy status of each parking lot. However, the existing parking guidance schemes mainly rely on deploying sensors or RSUs in the parking lot, which would incur substantial capital overhead. To reduce the aforementioned cost, we propose IPARK, which taps into the unused resources (e.g., wireless device, rechargeable battery, and storage capability) offered by parked vehicles to perform parking guidance. In IPARK, the cluster formed by parked vehicles generates the parking lot map automatically, monitors the occupancy status of each parking space in real time, and provides assistance for vehicles searching for parking spaces. We propose an efficient architecture for IPARK and investigate the challenging issues in realizing parking guidance over this architecture. Finally, we investigate IPARK through realistic experiments and simulation. The numerical results obtained verify that our scheme achieves effective parking guidance in VANETs.",
"title": ""
},
{
"docid": "e7bdf6d9a718127b5b9a94fed8afc0a5",
"text": "BACKGROUND\nUse of the Internet for health information continues to grow rapidly, but its impact on health care is unclear. Concerns include whether patients' access to large volumes of information will improve their health; whether the variable quality of the information will have a deleterious effect; the effect on health disparities; and whether the physician-patient relationship will be improved as patients become more equal partners, or be damaged if physicians have difficulty adjusting to a new role.\n\n\nMETHODS\nTelephone survey of nationally representative sample of the American public, with oversample of people in poor health.\n\n\nRESULTS\nOf the 3209 respondents, 31% had looked for health information on the Internet in the past 12 months, 16% had found health information relevant to themselves and 8% had taken information from the Internet to their physician. Looking for information on the Internet showed a strong digital divide; however, once information had been looked for, socioeconomic factors did not predict other outcomes. Most (71%) people who took information to the physician wanted the physician's opinion, rather than a specific intervention. The effect of taking information to the physician on the physician-patient relationship was likely to be positive as long as the physician had adequate communication skills, and did not appear challenged by the patient bringing in information.\n\n\nCONCLUSIONS\nFor health information on the Internet to achieve its potential as a force for equity and patient well-being, actions are required to overcome the digital divide; assist the public in developing searching and appraisal skills; and ensure physicians have adequate communication skills.",
"title": ""
},
{
"docid": "f471ac7a453dee8820f0cf6be7b8b657",
"text": "The spatial responses of many of the cells recorded in all layers of rodent medial entorhinal cortex (mEC) show mutually aligned grid patterns. Recent experimental findings have shown that grids can often be better described as elliptical rather than purely circular and that, beyond the mutual alignment of their grid axes, ellipses tend to also orient their long axis along preferred directions. Are grid alignment and ellipse orientation aspects of the same phenomenon? Does the grid alignment result from single-unit mechanisms or does it require network interactions? We address these issues by refining a single-unit adaptation model of grid formation, to describe specifically the spontaneous emergence of conjunctive grid-by-head-direction cells in layers III, V, and VI of mEC. We find that tight alignment can be produced by recurrent collateral interactions, but this requires head-direction (HD) modulation. Through a competitive learning process driven by spatial inputs, grid fields then form already aligned, and with randomly distributed spatial phases. In addition, we find that the self-organization process is influenced by any anisotropy in the behavior of the simulated rat. The common grid alignment often orients along preferred running directions (RDs), as induced in a square environment. When speed anisotropy is present in exploration behavior, the shape of individual grids is distorted toward an ellipsoid arrangement. Speed anisotropy orients the long ellipse axis along the fast direction. Speed anisotropy on its own also tends to align grids, even without collaterals, but the alignment is seen to be loose. Finally, the alignment of spatial grid fields in multiple environments shows that the network expresses the same set of grid fields across environments, modulo a coherent rotation and translation. Thus, an efficient metric encoding of space may emerge through spontaneous pattern formation at the single-unit level, but it is coherent, hence context-invariant, if aided by collateral interactions.",
"title": ""
},
{
"docid": "ab70c8814c0e15695c8142ce8aad69bc",
"text": "Domain-oriented dialogue systems are often faced with users that try to cross the limits of their knowledge, by unawareness of its domain limitations or simply to test their capacity. These interactions are considered to be Out-Of-Domain and several strategies can be found in the literature to deal with some specific situations. Since if a certain input appears once, it has a non-zero probability of being entered later, the idea of taking advantage of real human interactions to feed these dialogue systems emerges, thus, naturally. In this paper, we introduce the SubTle Corpus, a corpus of Interaction-Response pairs extracted from subtitles files, created to help dialogue systems to deal with Out-of-Domain interactions.",
"title": ""
},
{
"docid": "1c4e4f0ffeae8b03746ca7de184989ef",
"text": "Applications written in low-level languages without type or memory safety are prone to memory corruption. Attackers gain code execution capabilities through memory corruption despite all currently deployed defenses. Control-Flow Integrity (CFI) is a promising security property that restricts indirect control-flow transfers to a static set of well-known locations. We present Lockdown, a modular, fine-grained CFI policy that protects binary-only applications and libraries without requiring sourcecode. Lockdown adaptively discovers the control-flow graph of a running process based on the executed code. The sandbox component of Lockdown restricts interactions between different shared objects to imported and exported functions by enforcing fine-grained CFI checks using information from a trusted dynamic loader. A shadow stack enforces precise integrity for function returns. Our prototype implementation shows that Lockdown results in low performance overhead and a security analysis discusses any remaining gadgets.",
"title": ""
},
{
"docid": "eb91cfce08dd5e17599c18e3b291db3e",
"text": "The extraordinary electronic properties of graphene provided the main thrusts for the rapid advance of graphene electronics. In photonics, the gate-controllable electronic properties of graphene provide a route to efficiently manipulate the interaction of photons with graphene, which has recently sparked keen interest in graphene plasmonics. However, the electro-optic tuning capability of unpatterned graphene alone is still not strong enough for practical optoelectronic applications owing to its non-resonant Drude-like behaviour. Here, we demonstrate that substantial gate-induced persistent switching and linear modulation of terahertz waves can be achieved in a two-dimensional metamaterial, into which an atomically thin, gated two-dimensional graphene layer is integrated. The gate-controllable light-matter interaction in the graphene layer can be greatly enhanced by the strong resonances of the metamaterial. Although the thickness of the embedded single-layer graphene is more than six orders of magnitude smaller than the wavelength (<λ/1,000,000), the one-atom-thick layer, in conjunction with the metamaterial, can modulate both the amplitude of the transmitted wave by up to 47% and its phase by 32.2° at room temperature. More interestingly, the gate-controlled active graphene metamaterials show hysteretic behaviour in the transmission of terahertz waves, which is indicative of persistent photonic memory effects.",
"title": ""
},
{
"docid": "904efe77c6a31867cf096770b99e856b",
"text": "Deep neural networks have been proven powerful at processing perceptual data, such as images and audio. However for tabular data, tree-based models are more popular. A nice property of tree-based models is their natural interpretability. In this work, we present Deep Neural Decision Trees (DNDT) – tree models realised by neural networks. A DNDT is intrinsically interpretable, as it is a tree. Yet as it is also a neural network (NN), it can be easily implemented in NN toolkits, and trained with gradient descent rather than greedy splitting. We evaluate DNDT on several tabular datasets, verify its efficacy, and investigate similarities and differences between DNDT and vanilla decision trees. Interestingly, DNDT self-prunes at both split and feature-level.",
"title": ""
},
{
"docid": "0a2cba5e6d5b6b467e34e79ee099f509",
"text": "Wearable devices are used in various applications to collect information including step information, sleeping cycles, workout statistics, and health-related information. Due to the nature and richness of the data collected by such devices, it is important to ensure the security of the collected data. This paper presents a new lightweight authentication scheme suitable for wearable device deployment. The scheme allows a user to mutually authenticate his/her wearable device(s) and the mobile terminal (e.g., Android and iOS device) and establish a session key among these devices (worn and carried by the same user) for secure communication between the wearable device and the mobile terminal. The security of the proposed scheme is then demonstrated through the broadly accepted real-or-random model, as well as using the popular formal security verification tool, known as the Automated validation of Internet security protocols and applications. Finally, we present a comparative summary of the proposed scheme in terms of the overheads such as computation and communication costs, security and functionality features of the proposed scheme and related schemes, and also the evaluation findings from the NS2 simulation.",
"title": ""
},
{
"docid": "185ae8a2c89584385a810071c6003c15",
"text": "In this paper, we propose a free viewpoint image rendering method combined with filter based alpha matting for improving the image quality of image boundaries. When we synthesize a free viewpoint image, blur around object boundaries in an input image spills foreground/background color in the synthesized image. To generate smooth boundaries, alpha matting is a solution. In our method based on filtering, we make a boundary map from input images and depth maps, and then feather the map by using guided filter. In addition, we extend view synthesis method to deal the alpha channel. Experiment results show that the proposed method synthesizes 0.4 dB higher quality images than the conventional method without the matting. Also the proposed method synthesizes 0.2 dB higher quality images than the conventional method of robust matting. In addition, the computational cost of the proposed method is 100x faster than the conventional matting.",
"title": ""
},
{
"docid": "ad79b709607e468156060d9ac543638b",
"text": "I. Introduction UNTIL 35 yr ago, most scientists did not take research on the pineal gland seriously. The decade beginning in 1956, however, provided several discoveries that laid the foundation for what has become a very active area of investigation. These important early observations included the findings that, 1), the physiological activity of the pineal is influenced by the photoperiodic environment (1–5); 2), the gland contains a substance, N-acetyl-5-methoxytryptamine or melatonin, which has obvious endocrine capabilities (6, 7); 3), the function of the reproductive system in photoperiodically dependent rodents is inextricably linked to the physiology of the pineal gland (5, 8, 9); 4), the sympathetic innervation to the pineal is required for the gland to maintain its biosynthetic and endocrine activities (10, 11); and 5), the pineal gland can be rapidly removed from rodents with minimal damage to adjacent neural structures using a specially designed trephine (12).",
"title": ""
},
{
"docid": "849e8f3932c7db5643901f3909306292",
"text": "PHP is one of the most popular languages for server-side application development. The language is highly dynamic, providing programmers with a large amount of flexibility. However, these dynamic features also have a cost, making it difficult to apply traditional static analysis techniques used in standard code analysis and transformation tools. As part of our work on creating analysis tools for PHP, we have conducted a study over a significant corpus of open-source PHP systems, looking at the sizes of actual PHP programs, which features of PHP are actually used, how often dynamic features appear, and how distributed these features are across the files that make up a PHP website. We have also looked at whether uses of these dynamic features are truly dynamic or are, in some cases, statically understandable, allowing us to identify specific patterns of use which can then be taken into account to build more precise tools. We believe this work will be of interest to creators of analysis tools for PHP, and that the methodology we present can be leveraged for other dynamic languages with similar features.",
"title": ""
},
{
"docid": "6712ec987ec783dceae442993be0cd6e",
"text": "In this paper, we have conducted a literature review on the recent developments and publications involving the vehicle routing problem and its variants, namely vehicle routing problem with time windows (VRPTW) and the capacitated vehicle routing problem (CVRP) and also their variants. The VRP is classified as an NP-hard problem. Hence, the use of exact optimization methods may be difficult to solve these problems in acceptable CPU times, when the problem involves real-world data sets that are very large. The vehicle routing problem comes under combinatorial problem. Hence, to get solutions in determining routes which are realistic and very close to the optimal solution, we use heuristics and meta-heuristics. In this paper we discuss the various exact methods and the heuristics and meta-heuristics used to solve the VRP and its variants.",
"title": ""
},
{
"docid": "939593fce7d86ae8ccd6a67532fd78b0",
"text": "In patients with spinal cord injury, the primary or mechanical trauma seldom causes total transection, even though the functional loss may be complete. In addition, biochemical and pathological changes in the cord may worsen after injury. To explain these phenomena, the concept of the secondary injury has evolved for which numerous pathophysiological mechanisms have been postulated. This paper reviews the concept of secondary injury with special emphasis on vascular mechanisms. Evidence is presented to support the theory of secondary injury and the hypothesis that a key mechanism is posttraumatic ischemia with resultant infarction of the spinal cord. Evidence for the role of vascular mechanisms has been obtained from a variety of models of acute spinal cord injury in several species. Many different angiographic methods have been used for assessing microcirculation of the cord and for measuring spinal cord blood flow after trauma. With these techniques, the major systemic and local vascular effects of acute spinal cord injury have been identified and implicated in the etiology of secondary injury. The systemic effects of acute spinal cord injury include hypotension and reduced cardiac output. The local effects include loss of autoregulation in the injured segment of the spinal cord and a marked reduction of the microcirculation in both gray and white matter, especially in hemorrhagic regions and in adjacent zones. The microcirculatory loss extends for a considerable distance proximal and distal to the site of injury. Many studies have shown a dose-dependent reduction of spinal cord blood flow varying with the severity of injury, and a reduction of spinal cord blood flow which worsens with time after injury. The functional deficits due to acute spinal cord injury have been measured electrophysiologically with techniques such as motor and somatosensory evoked potentials and have been found proportional to the degree of posttraumatic ischemia. The histological effects include early hemorrhagic necrosis leading to major infarction at the injury site. These posttraumatic vascular effects can be treated. Systemic normotension can be restored with volume expansion or vasopressors, and spinal cord blood flow can be improved with dopamine, steroids, nimodipine, or volume expansion. The combination of nimodipine and volume expansion improves posttraumatic spinal cord blood flow and spinal cord function measured by evoked potentials. These results provide strong evidence that posttraumatic ischemia is an important secondary mechanism of injury, and that it can be counteracted.",
"title": ""
},
{
"docid": "d61fde7b9c5a2c17c7cafb44aa51fe9b",
"text": "868 NOTICES OF THE AMS VOLUME 47, NUMBER 8 In April 2000 the National Council of Teachers of Mathematics (NCTM) released Principles and Standards for School Mathematics—the culmination of a multifaceted, three-year effort to update NCTM’s earlier standards documents and to set forth goals and recommendations for mathematics education in the prekindergarten-through-grade-twelve years. As the chair of the Writing Group, I had the privilege to interact with all aspects of the development and review of this document and with the committed groups of people, including the members of the Writing Group, who contributed immeasurably to this process. This article provides some background about NCTM and the standards, the process of development, efforts to gather input and feedback, and ways in which feedback from the mathematics community influenced the document. The article concludes with a section that provides some suggestions for mathematicians who are interested in using Principles and Standards.",
"title": ""
}
] |
scidocsrr
|
5b0c90036b1630088a81e43c06577626
|
Approaches for teaching computational thinking strategies in an educational game: A position paper
|
[
{
"docid": "b64a91ca7cdeb3dfbe5678eee8962aa7",
"text": "Computational thinking is gaining recognition as an important skill set for students, both in computer science and other disciplines. Although there has been much focus on this field in recent years, it is rarely taught as a formal course within the curriculum, and there is little consensus on what exactly computational thinking entails and how to teach and evaluate it. To address these concerns, we have developed a computational thinking framework to be used as a planning and evaluative tool. Within this framework, we aim to unify the differing opinions about what computational thinking should involve. As a case study, we have applied the framework to Light-Bot, an educational game with a strong focus on programming, and found that the framework provides us with insight into the usefulness of the game to reinforce computer science concepts.",
"title": ""
}
] |
[
{
"docid": "ba11acc4f3e981893b2844ec16286962",
"text": "Continual Learning in artificial neural networks suffers from interference and forgetting when different tasks are learned sequentially. This paper introduces the Active Long Term Memory Networks (A-LTM), a model of sequential multitask deep learning that is able to maintain previously learned association between sensory input and behavioral output while acquiring knew knowledge. A-LTM exploits the non-convex nature of deep neural networks and actively maintains knowledge of previously learned, inactive tasks using a distillation loss [1]. Distortions of the learned input-output map are penalized but hidden layers are free to transverse towards new local optima that are more favorable for the multi-task objective. We re-frame the McClelland’s seminal Hippocampal theory [2] with respect to Catastrophic Inference (CI) behavior exhibited by modern deep architectures trained with back-propagation and inhomogeneous sampling of latent factors across epochs. We present empirical results of non-trivial CI during continual learning in Deep Linear Networks trained on the same task, in Convolutional Neural Networks when the task shifts from predicting semantic to graphical factors and during domain adaptation from simple to complex environments. We present results of the A-LTM model’s ability to maintain viewpoint recognition learned in the highly controlled iLab-20M [3] dataset with 10 object categories and 88 camera viewpoints, while adapting to the unstructured domain of Imagenet [4] with 1,000 object categories.",
"title": ""
},
{
"docid": "6554f662f667b8b53ad7b75abfa6f36f",
"text": "present paper introduces an innovative approach to automatically grade the disease on plant leaves. The system effectively inculcates Information and Communication Technology (ICT) in agriculture and hence contributes to Precision Agriculture. Presently, plant pathologists mainly rely on naked eye prediction and a disease scoring scale to grade the disease. This manual grading is not only time consuming but also not feasible. Hence the current paper proposes an image processing based approach to automatically grade the disease spread on plant leaves by employing Fuzzy Logic. The results are proved to be accurate and satisfactory in contrast with manual grading. Keywordscolor image segmentation, disease spot extraction, percent-infection, fuzzy logic, disease grade. INTRODUCTION The sole area that serves the food needs of the entire human race is the Agriculture sector. It has played a key role in the development of human civilization. Plants exist everywhere we live, as well as places without us. Plant disease is one of the crucial causes that reduces quantity and degrades quality of the agricultural products. Plant Pathology is the scientific study of plant diseases caused by pathogens (infectious diseases) and environmental conditions (physiological factors). It involves the study of pathogen identification, disease etiology, disease cycles, economic impact, plant disease epidemiology, plant disease resistance, pathosystem genetics and management of plant diseases. Disease is impairment to the normal state of the plant that modifies or interrupts its vital functions such as photosynthesis, transpiration, pollination, fertilization, germination etc. Plant diseases have turned into a nightmare as it can cause significant reduction in both quality and quantity of agricultural products [2]. Information and Communication Technology (ICT) application is going to be implemented as a solution in improving the status of the agriculture sector [3]. Due to the manifestation and developments in the fields of sensor networks, robotics, GPS technology, communication systems etc, precision agriculture started emerging [10]. The objectives of precision agriculture are profit maximization, agricultural input rationalization and environmental damage reduction by adjusting the agricultural practices to the site demands. In the area of disease management, grade of the disease is determined to provide an accurate and precision treatment advisory. EXISTING SYSTEM: MANUAL GRADING Presently the plant pathologists mainly rely on the naked eye prediction and a disease scoring scale to grade the disease on leaves. There are some problems associated with this manual grading. Diseases are inevitable in plants. When a plant gets affected by the disease, a treatment advisory is required to cure the Arun Kumar R et al, Int. J. Comp. Tech. Appl., Vol 2 (5), 1709-1716 IJCTA | SEPT-OCT 2011 Available [email protected] 1709 ISSN:2229-6093",
"title": ""
},
{
"docid": "ed71b6b9cc2feca022ad641b6e0ca458",
"text": "This chapter surveys recent developments in the area of multimedia big data, the biggest big data. One core problem is how to best process this multimedia big data in an efficient and scalable way. We outline examples of the use of the MapReduce framework, including Hadoop, which has become the most common approach to a truly scalable and efficient framework for common multimedia processing tasks, e.g., content analysis and retrieval. We also examine recent developments on deep learning which has produced promising results in large-scale multimedia processing and retrieval. Overall the focus has been on empirical studies rather than the theoretical so as to highlight the most practically successful recent developments and highlight the associated caveats or lessons learned.",
"title": ""
},
{
"docid": "88b89521775ba2d8570944a54e516d0f",
"text": "The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organisms scales; and specialized analytics to define the “physiological envelope” during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine becomes the research priority.",
"title": ""
},
{
"docid": "fcc03d4ec2b1073d76b288abc76c92cb",
"text": "Deep Learning is arguably the most rapidly evolving research area in recent years. As a result it is not surprising that the design of state-of-the-art deep neural net models proceeds without much consideration of the latest hardware targets, and the design of neural net accelerators proceeds without much consideration of the characteristics of the latest deep neural net models. Nevertheless, in this paper we show that there are significant improvements available if deep neural net models and neural net accelerators are co-designed. This paper is trimmed to 6 pages to meet the conference requirement. A longer version with more detailed discussion will be released afterwards.",
"title": ""
},
{
"docid": "6c175d7a90ed74ab3b115977c82b0ffa",
"text": "We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of connections follow power laws that indicate a scale-free pattern of connectivity, with most nodes having relatively few connections joined together through a small number of hubs with many connections. These regularities have also been found in certain other complex natural networks, such as the World Wide Web, but they are not consistent with many conventional models of semantic organization, based on inheritance hierarchies, arbitrarily structured networks, or high-dimensional vector spaces. We propose that these structures reflect the mechanisms by which semantic networks grow. We describe a simple model for semantic growth, in which each new word or concept is connected to an existing network by differentiating the connectivity pattern of an existing node. This model generates appropriate small-world statistics and power-law connectivity distributions, and it also suggests one possible mechanistic basis for the effects of learning history variables (age of acquisition, usage frequency) on behavioral performance in semantic processing tasks.",
"title": ""
},
{
"docid": "09fa4dd4fbebc2295d42944cfa6d3a6f",
"text": "Blockchain has proven to be successful in decision making using the streaming live data in various applications, it is the latest form of Information Technology. There are two broad Blockchain categories, public and private. Public Blockchains are very transparent as the data is distributed and can be accessed by anyone within the distributed system. Private Blockchains are restricted and therefore data transfer can only take place in the constrained environment. Using private Blockchains in maintaining private records for managed history or governing regulations can be very effective due to the data and records, or logs being made with respect to particular user or application. The Blockchain system can also gather data records together and transfer them as secure data records to a third party who can then take further actions. In this paper, an automotive road safety case study is reviewed to demonstrate the feasibility of using private Blockchains in the automotive industry. Within this case study anomalies occur when a driver ignores the traffic rules. The Blockchain system itself monitors and logs the behavior of a driver using map layers, geo data, and external rules obtained from the local governing body. As the information is logged the driver’s privacy information is not shared and so it is both accurate and a secure system. Additionally private Blockchains are small systems therefore they are easy to maintain and faster when compared to distributed (public) Blockchains.",
"title": ""
},
{
"docid": "49148d621dcda718ec5ca761d3485240",
"text": "Understanding and modifying the effects of arbitrary illumination on human faces in a realistic manner is a challenging problem both for face synthesis and recognition. Recent research demonstrates that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace using spherical harmonics representation. Morphable models are statistical ensembles of facial properties such as shape and texture. In this paper, we integrate spherical harmonics into the morphable model framework, by proposing a 3D spherical harmonic basis morphable model (SHBMM) and demonstrate that any face under arbitrary unknown lighting can be simply represented by three low-dimensional vectors: shape parameters, spherical harmonic basis parameters and illumination coefficients. We show that, with our SHBMM, given one single image under arbitrary unknown lighting, we can remove the illumination effects from the image (face \"delighting\") and synthesize new images under different illumination conditions (face \"re-lighting\"). Furthermore, we demonstrate that cast shadows can be detected and subsequently removed by using the image error between the input image and the corresponding rendered image. We also propose two illumination invariant face recognition methods based on the recovered SHBMM parameters and the de-lit images respectively. Experimental results show that using only a single image of a face under unknown lighting, we can achieve high recognition rates and generate photorealistic images of the face under a wide range of illumination conditions, including multiple sources of illumination.",
"title": ""
},
{
"docid": "6ed5198b9b0364f41675b938ec86456f",
"text": "Artificial intelligence (AI) will have many profound societal effects It promises potential benefits (and may also pose risks) in education, defense, business, law, and science In this article we explore how AI is likely to affect employment and the distribution of income. We argue that AI will indeed reduce drastically the need fol human toil We also note that some people fear the automation of work hy machines and the resulting unemployment Yet, since the majority of us probably would rather use our time for activities other than our present jobs, we ought thus to greet the work-eliminating consequences of AI enthusiastically The paper discusses two reasons, one economic and one psychological, for this paradoxical apprehension We conclude with a discussion of problems of moving toward the kind of economy that will he enahled by developments in AI ARTIFICIAL INTELLIGENCE [Al] and other developments in computer science are giving birth to a dramatically different class of machinesPmachines that can perform tasks requiring reasoning, judgment, and perception that previously could be done only by humans. Will these I am grateful for the helpful comments provided by many people Specifically I would like to acknowledge the advice teceived from Sandra Cook and Victor Walling of SRI, Wassily Leontief and Faye Duchin of the New York University Institute for Economic Analysis, Margaret Boden of The University of Sussex, Henry Levin and Charles Holloway of Stanford University, James Albus of the National Bureau of Standards, and Peter Hart of Syntelligence Herbert Simon, of CarnegieMellon Univetsity, wrote me extensive criticisms and rebuttals of my arguments Robert Solow of MIT was quite skeptical of my premises, but conceded nevertheless that my conclusions could possibly follow from them if certain other economic conditions were satisfied. Save1 Kliachko of SRI improved my composition and also referred me to a prescient article by Keynes (Keynes, 1933) who, a half-century ago, predicted an end to toil within one hundred years machines reduce the need for human toil and thus cause unemployment? There are two opposing views in response to this question Some claim that AI is not really very different from other technologies that have supported automation and increased productivitytechnologies such as mechanical engineering, ele&onics, control engineering, and operations rcsearch. Like them, AI may also lead ultimately to an expanding economy with a concomitant expansion of employment opportunities. At worst, according to this view, thcrc will be some, perhaps even substantial shifts in the types of jobs, but certainly no overall reduction in the total number of jobs. In my opinion, however, such an out,come is based on an overly conservative appraisal of the real potential of artificial intelligence. Others accept a rather strong hypothesis with regard to AI-one that sets AI far apart from previous labor-saving technologies. Quite simply, this hypothesis affirms that anything people can do, AI can do as well. Cert,ainly AI has not yet achieved human-level performance in many important functions, but many AI scientists believe that artificial intelligence inevitably will equal and surpass human mental abilities-if not in twenty years, then surely in fifty. 
The main conclusion of this view of AI is that, even if AI does create more work, this work can also be performed by AI devices without necessarily implying more jobs for humans. Of course, the mere fact that some work can be performed automatically does not make it inevitable that it will be. Automation depends on many factors - economic, political, and social. The major economic parameter would seem to be the relative cost of having either people or machines execute a given task (at a specified rate and level of quality).",
"title": ""
},
{
"docid": "bc726085dace24ccdf33a2cd58ab8016",
"text": "The output of high-level synthesis typically consists of a netlist of generic RTL components and a state sequencing table. While module generators and logic synthesis tools can be used to map RTL components into standard cells or layout geometries, they cannot provide technology mapping into the data book libraries of functional RTL cells used commonly throughout the industrial design community. In this paper, we introduce an approach to implementing generic RTL components with technology-specific RTL library cells. This approach addresses the criticism of designers who feel that high-level synthesis tools should be used in conjunction with existing RTL data books. We describe how GENUS, a library of generic RTL components, is organized for use in high-level synthesis and how DTAS, a functional synthesis system, is used to map GENUS components into RTL library cells.",
"title": ""
},
{
"docid": "56cc387384839b3f45a06f42b6b74a5f",
"text": "This paper presents a likelihood-based methodology for a probabilistic representation of a stochastic quantity for which only sparse point data and/or interval data may be available. The likelihood function is evaluated from the probability density function (PDF) for sparse point data and the cumulative distribution function for interval data. The full likelihood function is used in this paper to calculate the entire PDF of the distribution parameters. The uncertainty in the distribution parameters is integrated to calculate a single PDF for the quantity of interest. The approach is then extended to non-parametric PDFs, wherein the entire distribution can be discretized at a finite number of points and the probability density values at these points can be inferred using the principle of maximum likelihood, thus avoiding the assumption of any particular distribution. The proposed approach is demonstrated with challenge problems from the Sandia Epistemic Uncertainty Workshop and the results are compared with those of previous studies that pursued different approaches to represent and propagate interval description of",
"title": ""
},
{
"docid": "a8dbb16b9a0de0dcae7780ffe4c0b7cf",
"text": "Increased demands on implementation of wireless sensor networks in automation praxis result in relatively new wireless standard – ZigBee. The new workplace was established on the Department of Electronics and Multimedia Communications (DEMC) in order to keep up with ZigBee modern trend. This paper presents the first results and experiences associated with ZigBee based wireless sensor networking. The accent was put on suitable chipset platform selection for Home Automation wireless network purposes. Four popular microcontrollers was selected to investigate memory requirements and power consumption such as ARM, x51, HCS08, and Coldfire. Next objective was to test interoperability between various manufacturers’ platforms, what is important feature of ZigBee standard. A simple network based on ZigBee physical layer as well as ZigBee compliant network were made to confirm the basic ZigBee interoperability.",
"title": ""
},
{
"docid": "564675e793834758bd66e440b65be206",
"text": "While it is still most common for information visualization researchers to develop new visualizations from a data-or taskdriven perspective, there is growing interest in understanding the types of visualizations people create by themselves for personal use. As part of this recent direction, we have studied a large collection of whiteboards in a research institution, where people make active use of combinations of words, diagrams and various types of visuals to help them further their thought processes. Our goal is to arrive at a better understanding of the nature of visuals that are created spontaneously during brainstorming, thinking, communicating, and general problem solving on whiteboards. We use the qualitative approaches of open coding, interviewing, and affinity diagramming to explore the use of recognizable and novel visuals, and the interplay between visualization and diagrammatic elements with words, numbers and labels. We discuss the potential implications of our findings on information visualization design.",
"title": ""
},
{
"docid": "07e2dae7b1ed0c7164e59bd31b0d3f87",
"text": "The requirement to perform complicated statistic analysis of big data by institutions of engineering, scientific research, health care, commerce, banking and computer research is immense. However, the limitations of the widely used current desktop software like R, excel, minitab and spss gives a researcher limitation to deal with big data. The big data analytic tools like IBM Big Insight, Revolution Analytics, and tableau software are commercial and heavily license. Still, to deal with big data, client has to invest in infrastructure, installation and maintenance of hadoop cluster to deploy these analytical tools. Apache Hadoop is an open source distributed computing framework that uses commodity hardware. With this project, I intend to collaborate Apache Hadoop and R software over the on the Cloud. Objective is to build a SaaS (Software-as-a-Service) analytic platform that stores & analyzes big data using open source Apache Hadoop and open source R software. The benefits of this cloud based big data analytical service are user friendliness & cost as it is developed using open-source software. The system is cloud based so users have their own space in cloud where user can store there data. User can browse data, files, folders using browser and arrange datasets. User can select dataset and analyze required dataset and store result back to cloud storage. Enterprise with a cloud environment can save cost of hardware, upgrading software, maintenance or network configuration, thus it making it more economical.",
"title": ""
},
{
"docid": "e13d935c4950323a589dce7fd5bce067",
"text": "Worker reliability is a longstanding issue in crowdsourcing, and the automatic discovery of high quality workers is an important practical problem. Most previous work on this problem mainly focuses on estimating the quality of each individual worker jointly with the true answer of each task. However, in practice, for some tasks, worker quality could be associated with some explicit characteristics of the worker, such as education level, major and age. So the following question arises: how do we automatically discover related worker attributes for a given task, and further utilize the findings to improve data quality? In this paper, we propose a general crowd targeting framework that can automatically discover, for a given task, if any group of workers based on their attributes have higher quality on average; and target such groups, if they exist, for future work on the same task. Our crowd targeting framework is complementary to traditional worker quality estimation approaches. Furthermore, an advantage of our framework is that it is more budget efficient because we are able to target potentially good workers before they actually do the task. Experiments on real datasets show that the accuracy of final prediction can be improved significantly for the same budget (or even less budget in some cases). Our framework can be applied to many real word tasks and can be easily integrated in current crowdsourcing platforms.",
"title": ""
},
{
"docid": "b3c81ac4411c2461dcec7be210ce809c",
"text": "The rapid proliferation of the Internet and the cost-effective growth of its key enabling technologies are revolutionizing information technology and creating unprecedented opportunities for developing largescale distributed applications. At the same time, there is a growing concern over the security of Web-based applications, which are rapidly being deployed over the Internet [4]. For example, e-commerce—the leading Web-based application—is projected to have a market exceeding $1 trillion over the next several years. However, this application has already become a security nightmare for both customers and business enterprises as indicated by the recent episodes involving unauthorized access to credit card information. Other leading Web-based applications with considerable information security and privacy issues include telemedicine-based health-care services and online services or businesses involving both public and private sectors. Many of these applications are supported by workflow management systems (WFMSs) [1]. A large number of public and private enterprises are in the forefront of adopting Internetbased WFMSs and finding ways to improve their services and decision-making processes, hence we are faced with the daunting challenge of ensuring the security and privacy of information in such Web-based applications [4]. Typically, a Web-based application can be represented as a three-tier architecture, depicted in the figure, which includes a Web client, network servers, and a back-end information system supported by a suite of databases. For transaction-oriented applications, such as e-commerce, middleware is usually provided between the network servers and back-end systems to ensure proper interoperability. Considerable security challenges and vulnerabilities exist within each component of this architecture. Existing public-key infrastructures (PKIs) provide encryption mechanisms for ensuring information confidentiality, as well as digital signature techniques for authentication, data integrity and non-repudiation [11]. As no access authorization services are provided in this approach, it has a rather limited scope for Web-based applications. The strong need for information security on the Internet is attributable to several factors, including the massive interconnection of heterogeneous and distributed systems, the availability of high volumes of sensitive information at the end systems maintained by corporations and government agencies, easy distribution of automated malicious software by malfeasors, the ease with which computer crimes can be committed anonymously from across geographic boundaries, and the lack of forensic evidence in computer crimes, which makes the detection and prosecution of criminals extremely difficult. Two classes of services are crucial for a secure Internet infrastructure. These include access control services and communication security services. Access James B.D. Joshi,",
"title": ""
},
{
"docid": "a1b3289280bab5a58ef3b23632e01f5b",
"text": "Current devices have limited battery life, typically lasting less than one day. This can lead to situations where critical tasks, such as making an emergency phone call, are not possible. Other devices, supporting different functionality, may have sufficient battery life to enable this task. We present PowerShake; an exploration of power as a shareable commodity between mobile (and wearable) devices. PowerShake enables users to control the balance of power levels in their own devices (intra-personal transactions) and to trade power with others (inter-personal transactions) according to their ongoing usage requirements. This paper demonstrates Wireless Power Transfer (WPT) between mobile devices. PowerShake is: simple to perform on-the-go; supports ongoing/continuous tasks (transferring at ~3.1W); fits in a small form factor; and is compliant with electromagnetic safety guidelines while providing charging efficiency similar to other standards (48.2% vs. 51.2% in Qi). Based on our proposed technical implementation, we run a series of workshops to derive candidate designs for PowerShake enabled devices and interactions, and to bring to light the social implications of power as a tradable asset.",
"title": ""
},
{
"docid": "3e26fe227e8c270fda4fe0b7d09b2985",
"text": "With the recent emergence of mobile platforms capable of executing increasingly complex software and the rising ubiquity of using mobile platforms in sensitive applications such as banking, there is a rising danger associated with malware targeted at mobile devices. The problem of detecting such malware presents unique challenges due to the limited resources avalible and limited privileges granted to the user, but also presents unique opportunity in the required metadata attached to each application. In this article, we present a machine learning-based system for the detection of malware on Android devices. Our system extracts a number of features and trains a One-Class Support Vector Machine in an offline (off-device) manner, in order to leverage the higher computing power of a server or cluster of servers.",
"title": ""
},
{
"docid": "4fd0808988829c4d477c01b22be0c98f",
"text": "Victor Hugo suggested the possibility that patterns created by the movement of grains of sand are in no small part responsible for the shape and feel of the natural world in which we live. No one can seriously doubt that granular materials, of which sand is but one example, are ubiquitous in our daily lives. They play an important role in many of our industries, such as mining, agriculture, and construction. They clearly are also important for geological processes where landslides, erosion, and, on a related but much larger scale, plate tectonics determine much of the morphology of Earth. Practically everything that we eat started out in a granular form, and all the clutter on our desks is often so close to the angle of repose that a chance perturbation will create an avalanche onto the floor. Moreover, Hugo hinted at the extreme sensitivity of the macroscopic world to the precise motion or packing of the individual grains. We may nevertheless think that he has overstepped the bounds of common sense when he related the creation of worlds to the movement of simple grains of sand. By the end of this article, we hope to have shown such an enormous richness and complexity to granular motion that Hugo’s metaphor might no longer appear farfetched and could have a literal meaning: what happens to a pile of sand on a table top is relevant to processes taking place on an astrophysical scale. Granular materials are simple: they are large conglomerations of discrete macroscopic particles. If they are noncohesive, then the forces between them are only repulsive so that the shape of the material is determined by external boundaries and gravity. If the grains are dry, any interstitial fluid, such as air, can often be neglected in determining many, but not all, of the flow and static properties of the system. Yet despite this seeming simplicity, a granular material behaves differently from any of the other familiar forms of matter—solids, liquids, or gases—and should therefore be considered an additional state of matter in its own right. In this article, we shall examine in turn the unusual behavior that granular material displays when it is considered to be a solid, liquid, or gas. For example, a sand pile at rest with a slope lower than the angle of repose, as in Fig. 1(a), behaves like a solid: the material remains at rest even though gravitational forces create macroscopic stresses on its surface. If the pile is tilted several degrees above the angle of repose, grains start to flow, as seen in Fig. 1(b). However, this flow is clearly not that of an ordinary fluid because it only exists in a boundary layer at the pile’s surface with no movement in the bulk at all. (Slurries, where grains are mixed with a liquid, have a phenomenology equally complex as the dry powders we shall describe in this article.) There are two particularly important aspects that contribute to the unique properties of granular materials: ordinary temperature plays no role, and the interactions between grains are dissipative because of static friction and the inelasticity of collisions. We might at first be tempted to view any granular flow as that of a dense gas since gases, too, consist of discrete particles with negligible cohesive forces between them. In contrast to ordinary gases, however, the energy scale kBT is insignificant here. The relevant energy scale is the potential energy mgd of a grain of mass m raised by its own diameter d in the Earth’s gravity g . 
For typical sand, this energy is at least 10^12 times kBT at room temperature. Because kBT is irrelevant, ordinary thermodynamic arguments become useless. For example, many studies have shown (Williams, 1976; Rosato et al., 1987; Fan et al., 1990; Jullien et al., 1992; Duran et al., 1993; Knight et al., 1993; Savage, 1993; Zik et al., 1994; Hill and Kakalios, 1994; Metcalfe et al., 1995) that vibrations or rotations of a granular material will induce particles of different sizes to separate into different regions of the container. Since there are no attractive forces between",
"title": ""
},
{
"docid": "da61b8bd6c1951b109399629f47dad16",
"text": "In this paper, we introduce an approach for distributed nonlinear control of multiple hovercraft-type underactuated vehicles with bounded and unidirectional inputs. First, a bounded nonlinear controller is given for stabilization and tracking of a single vehicle, using a cascade backstepping method. Then, this controller is combined with a distributed gradient-based control for multi-vehicle formation stabilization using formation potential functions previously constructed. The vehicles are used in the Caltech Multi-Vehicle Wireless Testbed (MVWT). We provide simulation and experimental results for stabilization and tracking of a single vehicle, and a simulation of stabilization of a six-vehicle formation, demonstrating that in all cases the control bounds and the control objective are satisfied.",
"title": ""
}
] |
scidocsrr
|
7f402e7cee7ba14153df8b80962f7347
|
Airwriting: Hands-Free Mobile Text Input by Spotting and Continuous Recognition of 3d-Space Handwriting with Inertial Sensors
|
[
{
"docid": "fa440af1d9ec65caf3cd37981919b56e",
"text": "We present a method for spotting sporadically occurring gestures in a continuous data stream from body-worn inertial sensors. Our method is based on a natural partitioning of continuous sensor signals and uses a two-stage approach for the spotting task. In a first stage, signal sections likely to contain specific motion events are preselected using a simple similarity search. Those preselected sections are then further classified in a second stage, exploiting the recognition capabilities of hidden Markov models. Based on two case studies, we discuss implementation details of our approach and show that it is a feasible strategy for the spotting of various types of motion events. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c26db11bfb98e1fcb32ef7a01adadd1c",
"text": "Until recently the weight and size of inertial sensors has prohibited their use in domains such as human motion capture. Recent improvements in the performance of small and lightweight micromachined electromechanical systems (MEMS) inertial sensors have made the application of inertial techniques to such problems possible. This has resulted in an increased interest in the topic of inertial navigation, however current introductions to the subject fail to sufficiently describe the error characteristics of inertial systems. We introduce inertial navigation, focusing on strapdown systems based on MEMS devices. A combination of measurement and simulation is used to explore the error characteristics of such systems. For a simple inertial navigation system (INS) based on the Xsens Mtx inertial measurement unit (IMU), we show that the average error in position grows to over 150 m after 60 seconds of operation. The propagation of orientation errors caused by noise perturbing gyroscope signals is identified as the critical cause of such drift. By simulation we examine the significance of individual noise processes perturbing the gyroscope signals, identifying white noise as the process which contributes most to the overall drift of the system. Sensor fusion and domain specific constraints can be used to reduce drift in INSs. For an example INS we show that sensor fusion using magnetometers can reduce the average error in position obtained by the system after 60 seconds from over 150 m to around 5 m. We conclude that whilst MEMS IMU technology is rapidly improving, it is not yet possible to build a MEMS based INS which gives sub-meter position accuracy for more than one minute of operation.",
"title": ""
}
] |
[
{
"docid": "3d0dddc16ae56d6952dd1026476fcbcc",
"text": "We introduce a collective action model of institutional innovation. This model, based on converging perspectives from the technology innovation management and social movements literature, views institutional change as a dialectical process in which partisan actors espousing conflicting views confront each other and engage in political behaviors to create and change institutions. The model represents an important complement to existing models of institutional change. We discuss how these models together account for various stages and cycles of institutional change.",
"title": ""
},
{
"docid": "ebc77c29a8f761edb5e4ca588b2e6fb5",
"text": "Gigantomastia by definition means bilateral benign progressive breast enlargement to a degree that requires breast reduction surgery to remove more than 1800 g of tissue on each side. It is seen at puberty or during pregnancy. The etiology for this condition is still not clear, but surgery remains the mainstay of treatment. We present a unique case of Gigantomastia, which was neither related to puberty nor pregnancy and has undergone three operations so far for recurrence.",
"title": ""
},
{
"docid": "ab50f458d919ba3ac3548205418eea62",
"text": "Department of Microbiology, School of Life Sciences, Bharathidasan University, Tiruchirappali 620 024, Tamilnadu, India. Department of Medical Biotechnology, Sri Ramachandra University, Porur, Chennai 600 116, Tamilnadu, India. CAS Marine Biology, Annamalai University, Parangipettai 608 502, Tamilnadu, India. Department of Zoology, DDE, Annamalai University, Annamalai Nagar 608 002, Tamilnadu, India Asian Pacific Journal of Tropical Disease (2012)S291-S295",
"title": ""
},
{
"docid": "229c701c28a0398045756170aff7788e",
"text": "This paper presents Point Convolutional Neural Networks (PCNN): a novel framework for applying convolutional neural networks to point clouds. The framework consists of two operators: extension and restriction, mapping point cloud functions to volumetric functions and vise-versa. A point cloud convolution is defined by pull-back of the Euclidean volumetric convolution via an extension-restriction mechanism.\n The point cloud convolution is computationally efficient, invariant to the order of points in the point cloud, robust to different samplings and varying densities, and translation invariant, that is the same convolution kernel is used at all points. PCNN generalizes image CNNs and allows readily adapting their architectures to the point cloud setting.\n Evaluation of PCNN on three central point cloud learning benchmarks convincingly outperform competing point cloud learning methods, and the vast majority of methods working with more informative shape representations such as surfaces and/or normals.",
"title": ""
},
{
"docid": "f6a1d7b206ca2796d4e91f3e8aceeed8",
"text": "Objective To develop a classifier that tackles the problem of determining the risk of a patient of suffering from a cardiovascular disease within the next ten years. The system has to provide both a diagnosis and an interpretable model explaining the decision. In this way, doctors are able to analyse the usefulness of the information given by the system. Methods Linguistic fuzzy rule-based classification systems are used, since they provide a good classification rate and a highly interpretable model. More specifically, a new methodology to combine fuzzy rule-based classification systems with interval-valued fuzzy sets is proposed, which is composed of three steps: 1) the modelling of the linguistic labels of the classifier using interval-valued fuzzy sets; 2) the use of theKα operator in the inference process and 3) the application of a genetic tuning to find the best ignorance degree that each interval-valued fuzzy set represents as well as the best value for the parameter α of theKα operator in each rule. Results Correspondingauthor. Tel:+34-948166048. Fax:+34-948168924 Email addresses: [email protected] (Jośe Antonio Sanz ), [email protected] (Mikel Galar),[email protected] (Aranzazu Jurio), [email protected] (Antonio Brugos), [email protected] (Miguel Pagola),[email protected] (Humberto Bustince) Preprint submitted to Elsevier November 13, 2013 © 2013. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/",
"title": ""
},
{
"docid": "c040df6f014e52b5fe76234bb4f277b3",
"text": "CRISPR–Cas systems provide microbes with adaptive immunity by employing short DNA sequences, termed spacers, that guide Cas proteins to cleave foreign DNA. Class 2 CRISPR–Cas systems are streamlined versions, in which a single RNA-bound Cas protein recognizes and cleaves target sequences. The programmable nature of these minimal systems has enabled researchers to repurpose them into a versatile technology that is broadly revolutionizing biological and clinical research. However, current CRISPR–Cas technologies are based solely on systems from isolated bacteria, leaving the vast majority of enzymes from organisms that have not been cultured untapped. Metagenomics, the sequencing of DNA extracted directly from natural microbial communities, provides access to the genetic material of a huge array of uncultivated organisms. Here, using genome-resolved metagenomics, we identify a number of CRISPR–Cas systems, including the first reported Cas9 in the archaeal domain of life, to our knowledge. This divergent Cas9 protein was found in little-studied nanoarchaea as part of an active CRISPR–Cas system. In bacteria, we discovered two previously unknown systems, CRISPR–CasX and CRISPR–CasY, which are among the most compact systems yet discovered. Notably, all required functional components were identified by metagenomics, enabling validation of robust in vivo RNA-guided DNA interference activity in Escherichia coli. Interrogation of environmental microbial communities combined with in vivo experiments allows us to access an unprecedented diversity of genomes, the content of which will expand the repertoire of microbe-based biotechnologies.",
"title": ""
},
{
"docid": "693d9ee4f286ef03175cb302ef1b2a93",
"text": "We explore the question of whether phase-based time-of-flight (TOF) range cameras can be used for looking around corners and through scattering diffusers. By connecting TOF measurements with theory from array signal processing, we conclude that performance depends on two primary factors: camera modulation frequency and the width of the specular lobe (“shininess”) of the wall. For purely Lambertian walls, commodity TOF sensors achieve resolution on the order of meters between targets. For seemingly diffuse walls, such as posterboard, the resolution is drastically reduced, to the order of 10cm. In particular, we find that the relationship between reflectance and resolution is nonlinear—a slight amount of shininess can lead to a dramatic improvement in resolution. Since many realistic scenes exhibit a slight amount of shininess, we believe that off-the-shelf TOF cameras can look around corners.",
"title": ""
},
{
"docid": "895d960fef8dd79cd42a1648b29380d8",
"text": "E-learning systems have gained nowadays a large student community due to the facility of use and the integration of one-to-one service. Indeed, the personalization of the learning process for every user is needed to increase the student satisfaction and learning efficiency. Nevertheless, the number of students who give up their learning process cannot be neglected. Therefore, it is mandatory to establish an efficient way to assess the level of personalization in such systems. In fact, assessing represents the evolution’s key in every personalized application and especially for the e-learning systems. Besides, when the e-learning system can decipher the student personality, the student learning process will be stabilized, and the dropout rate will be decreased. In this context, we propose to evaluate the personalization process in an e-learning platform using an intelligent referential system based on agents. It evaluates any recommendation made by the e-learning platform based on a comparison. We compare the personalized service of the e-learning system and those provided by our referential system. Therefore, our purpose consists in increasing the efficiency of the proposed system to obtain a significant assessment result; precisely, the aim is to improve the outcomes of every algorithm used in each defined agent. This paper deals with the intelligent agent ‘Mod-Knowledge’ responsible for analyzing the student interaction to trace the student knowledge state. The originality of this agent is that it treats the external and the internal student interactions using machine learning algorithms to obtain a complete view of the student knowledge state. The validation of this contribution is done with experiments showing that the proposed algorithms outperform the existing ones.",
"title": ""
},
{
"docid": "d688eb5e3ef9f161ef6593a406db39ee",
"text": "Counting codes makes qualitative content analysis a controversial approach to analyzing textual data. Several decades ago, mainstream content analysis rejected qualitative content analysis on the grounds that it was not sufficiently quantitative; today, it is often charged with not being sufficiently qualitative. This article argues that qualitative content analysis is distinctively qualitative in both its approach to coding and its interpretations of counts from codes. Rather than argue over whether to do qualitative content analysis, researchers must make informed decisions about when to use it in analyzing qualitative data.",
"title": ""
},
{
"docid": "64793728ef4adb16ea2f922b99d7df78",
"text": "Distributed data is ubiquitous in modern information driven applications. With multiple sources of data, the natural challenge is to determine how to collaborate effectively across proprietary organizational boundaries while maximizing the utility of collected information. Since using only local data gives suboptimal utility, techniques for privacy-preserving collaborative knowledge discovery must be developed. Existing cryptography-based work for privacy-preserving data mining is still too slow to be effective for large scale data sets to face today's big data challenge. Previous work on random decision trees (RDT) shows that it is possible to generate equivalent and accurate models with much smaller cost. We exploit the fact that RDTs can naturally fit into a parallel and fully distributed architecture, and develop protocols to implement privacy-preserving RDTs that enable general and efficient distributed privacy-preserving knowledge discovery.",
"title": ""
},
{
"docid": "a7c3eda27ff129915a59bde0f56069cf",
"text": "Recent proliferation of Unmanned Aerial Vehicles (UAVs) into the commercial space has been accompanied by a similar growth in aerial imagery . While useful in many applications, the utility of this visual data is limited in comparison with the total range of desired airborne missions. In this work, we extract depth of field information from monocular images from UAV on-board cameras using a single frame of data per-mapping. Several methods have been previously used with varying degrees of success for similar spatial inferencing tasks, however we sought to take a different approach by framing this as an augmented style-transfer problem. In this work, we sought to adapt two of the state-of-theart style transfer methods to the problem of depth mapping. The first method adapted was based on the unsupervised Pix2Pix approach. The second was developed using a cyclic generative adversarial network (cycle GAN). In addition to these two approaches, we also implemented a baseline algorithm previously used for depth map extraction on indoor scenes, the multi-scale deep network. Using the insights gained from these implementations, we then developed a new methodology to overcome the shortcomings observed that was inspired by recent work in perceptual feature-based style transfer. These networks were trained on matched UAV perspective visual image, depth-map pairs generated using Microsoft’s AirSim high-fidelity UAV simulation engine and environment. The performance of each network was tested using a reserved test set at the end of training and the effectiveness evaluated using against three metrics. While our new network was not able to outperform any of the other approaches but cycle GANs, we believe that the intuition behind the approach was demonstrated to be valid and that it may be successfully refined with future work.",
"title": ""
},
{
"docid": "72462eb4de6f73b765a2717c64af2ce8",
"text": "In this paper, we focus on designing an online credit card fraud detection framework with big data technologies, by which we want to achieve three major goals: 1) the ability to fuse multiple detection models to improve accuracy, 2) the ability to process large amount of data and 3) the ability to do the detection in real time. To accomplish that, we propose a general workflow, which satisfies most design ideas of current credit card fraud detection systems. We further implement the workflow with a new framework which consists of four layers: distributed storage layer, batch training layer, key-value sharing layer and streaming detection layer. With the four layers, we are able to support massive trading data storage, fast detection model training, quick model data sharing and real-time online fraud detection, respectively. We implement it with latest big data technologies like Hadoop, Spark, Storm, HBase, etc. A prototype is implemented and tested with a synthetic dataset, which shows great potentials of achieving the above goals.",
"title": ""
},
{
"docid": "be8864d6fb098c8a008bfeea02d4921a",
"text": "Active testing has recently been introduced to effectively test concurrent programs. Active testing works in two phases. It first uses predictive off-the-shelf static or dynamic program analyses to identify potential concurrency bugs, such as data races, deadlocks, and atomicity violations. In the second phase, active testing uses the reports from these predictive analyses to explicitly control the underlying scheduler of the concurrent program to accurately and quickly discover real concurrency bugs, if any, with very high probability and little overhead. In this paper, we present an extensible framework for active testing of Java programs. The framework currently implements three active testers based on data races, atomic blocks, and deadlocks.",
"title": ""
},
{
"docid": "a3386199b44e3164fafe8a8ae096b130",
"text": "Diehl Aerospace GmbH (DAs) is currently involved in national German Research & Technology (R&T) projects (e.g. SYSTAVIO, SESAM) and in European R&T projects like ASHLEY to extend and to improve the Integrated Modular Avionics (IMA) technology. Diehl Aerospace is investing to expand its current IMA technology to enable further integration of systems including hardware modules, associated software, tools and processes while increasing the level of standardization. An additional objective is to integrate more systems on a common computing platform which uses the same toolchain, processes and integration experiences. New IMA components enable integration of high integrity fast loop system applications such as control applications. Distributed architectures which provide new types of interfaces allow integration of secondary power distribution systems along with other IMA functions. Cross A/C type usage is also a future emphasis to increase standardization and decrease development and operating costs as well as improvements on time to market and affordability of systems.",
"title": ""
},
{
"docid": "c6f3d4b2a379f452054f4220f4488309",
"text": "3D Morphable Models (3DMMs) are powerful statistical models of 3D facial shape and texture, and among the state-of-the-art methods for reconstructing facial shape from single images. With the advent of new 3D sensors, many 3D facial datasets have been collected containing both neutral as well as expressive faces. However, all datasets are captured under controlled conditions. Thus, even though powerful 3D facial shape models can be learnt from such data, it is difficult to build statistical texture models that are sufficient to reconstruct faces captured in unconstrained conditions (in-the-wild). In this paper, we propose the first, to the best of our knowledge, in-the-wild 3DMM by combining a powerful statistical model of facial shape, which describes both identity and expression, with an in-the-wild texture model. We show that the employment of such an in-the-wild texture model greatly simplifies the fitting procedure, because there is no need to optimise with regards to the illumination parameters. Furthermore, we propose a new fast algorithm for fitting the 3DMM in arbitrary images. Finally, we have captured the first 3D facial database with relatively unconstrained conditions and report quantitative evaluations with state-of-the-art performance. Complementary qualitative reconstruction results are demonstrated on standard in-the-wild facial databases.",
"title": ""
},
{
"docid": "7f29ac5d70e8b06de6830a8171e98f23",
"text": "Falls are responsible for considerable morbidity, immobility, and mortality among older persons, especially those living in nursing homes. Falls have many different causes, and several risk factors that predispose patients to falls have been identified. To prevent falls, a systematic therapeutic approach to residents who have fallen is necessary, and close attention must be paid to identifying and reducing risk factors for falls among frail older persons who have not yet fallen. We review the problem of falls in the nursing home, focusing on identifiable causes, risk factors, and preventive approaches. Epidemiology Both the incidence of falls in older adults and the severity of complications increase steadily with age and increased physical disability. Accidents are the fifth leading cause of death in older adults, and falls constitute two thirds of these accidental deaths. About three fourths of deaths caused by falls in the United States occur in the 13% of the population aged 65 years and older [1, 2]. Approximately one third of older adults living at home will fall each year, and about 5% will sustain a fracture or require hospitalization. The incidence of falls and fall-related injuries among persons living in institutions has been reported in numerous epidemiologic studies [3-18]. These data are presented in Table 1. The mean fall incidence calculated from these studies is about three times the rate for community-living elderly persons (mean, 1.5 falls/bed per year), caused both by the more frail nature of persons living in institutions and by more accurate reporting of falls in institutions. Table 1. Incidence of Falls and Fall-Related Injuries in Long-Term Care Facilities* As shown in Table 1, only about 4% of falls (range, 1% to 10%) result in fractures, whereas other serious injuries such as head trauma, soft-tissue injuries, and severe lacerations occur in about 11% of falls (range, 1% to 36%). However, once injured, an elderly person who has fallen has a much higher case fatality rate than does a younger person who has fallen [1, 2]. Each year, about 1800 fatal falls occur in nursing homes. Among persons 85 years and older, 1 of 5 fatal falls occurs in a nursing home [19]. Nursing home residents also have a disproportionately high incidence of hip fracture and have been shown to have higher mortality rates after hip fracture than community-living elderly persons [20]. Furthermore, because of the high frequency of recurrent falls in nursing homes, the likelihood of sustaining an injurious fall is substantial. In addition to injuries, falls can have serious consequences for physical functioning and quality of life. Loss of function can result from both fracture-related disability and self-imposed functional limitations caused by fear of falling and the postfall anxiety syndrome. Decreased confidence in the ability to ambulate safely can lead to further functional decline, depression, feelings of helplessness, and social isolation. In addition, the use of physical or chemical restraints by institutional staff to prevent high-risk persons from falling also has negative effects on functioning. Causes of Falls The major reported immediate causes of falls and their relative frequencies as described in four detailed studies of nursing home populations [14, 15, 17, 21] are presented in Table 2. The Table also contains a comparison column of causes of falls among elderly persons not living in institutions as summarized from seven detailed studies [21-28]. 
The distribution of causes clearly differs among the populations studied. Frail, high-risk persons living in institutions tend to have a higher incidence of falls caused by gait disorders, weakness, dizziness, and confusion, whereas the falls of community-living persons are more related to their environment. Table 2. Comparison of Causes of Falls in Nursing Home and Community-Living Populations: Summary of Studies That Carefully Evaluated Elderly Persons after a Fall and Specified a Most Likely Cause In the nursing home, weakness and gait problems were the most common causes of falls, accounting for about a quarter of reported cases. Studies have reported that the prevalence of detectable lower-extremity weakness ranges from 48% among community-living older persons [29] to 57% among residents of an intermediate-care facility [30] to more than 80% of residents of a skilled nursing facility [27]. Gait disorders affect 20% to 50% of elderly persons [31], and nearly three quarters of nursing home residents require assistance with ambulation or cannot ambulate [32]. Investigators of casecontrol studies in nursing homes have reported that more than two thirds of persons who have fallen have substantial gait disorders, a prevalence 2.4 to 4.8 times higher than the prevalence among persons who have not fallen [27, 30]. The cause of muscle weakness and gait problems is multifactorial. Aging introduces physical changes that affect strength and gait. On average, healthy older persons score 20% to 40% lower on strength tests than young adults [33], and, among chronically ill nursing home residents, strength is considerably less than that. Much of the weakness seen in the nursing home stems from deconditioning due to prolonged bedrest or limited physical activity and chronic debilitating medical conditions such as heart failure, stroke, or pulmonary disease. Aging is also associated with other deteriorations that impair gait, including increased postural sway; decreased gait velocity, stride length, and step height; prolonged reaction time; and decreased visual acuity and depth perception. Gait problems can also stem from dysfunction of the nervous, musculoskeletal, circulatory, or respiratory systems, as well as from simple deconditioning after a period of inactivity. Dizziness is commonly reported by elderly persons who have fallen and was the attributed cause in 25% of reported nursing home falls. This symptom is often difficult to evaluate because dizziness means different things to different people and has diverse causes. True vertigo, a sensation of rotational movement, may indicate a disorder of the vestibular apparatus such as benign positional vertigo, acute labyrinthitis, or Meniere disease. Symptoms described as imbalance on walking often reflect a gait disorder. Many residents describe a vague light-headedness that may reflect cardiovascular problems, hyperventilation, orthostatic hypotension, drug side effect, anxiety, or depression. Accidents, or falls stemming from environmental hazards, are a major cause of reported falls16% of nursing home falls and 41% of community falls. However, the circumstances of accidents are difficult to verify, and many falls in this category may actually stem from interactions between environmental hazards or hazardous activities and increased individual susceptibility to hazards because of aging and disease. Among impaired residents, even normal activities of daily living might be considered hazardous if they are done without assistance or modification. 
Factors such as decreased lower-extremity strength, poor posture control, and decreased step height all interact to impair the ability to avoid a fall after an unexpected trip or while reaching or bending. Age-associated impairments of vision, hearing, and memory also tend to increase the number of trips. Studies have shown that most falls in nursing homes occurred during transferring from a bed, chair, or wheelchair [3, 11]. Attempting to move to or from the bathroom and nocturia (which necessitates frequent trips to the bathroom) have also been reported to be associated with falls [34, 35] and fall-related fractures [9]. Environmental hazards that frequently contribute to these falls include wet floors caused by episodes of incontinence, poor lighting, bedrails, and improper bed height. Falls have also been reported to increase when nurse staffing is low, such as during breaks and at shift changes [4, 7, 9, 13], presumably because of lack of staff supervision. Confusion and cognitive impairment are frequently cited causes of falls and may reflect an underlying systemic or metabolic process (for example, electrolyte imbalance or fever). Dementia can increase the number of falls by impairing judgment, visual-spatial perception, and ability to orient oneself geographically. Falls also occur when residents with dementia wander, attempt to get out of wheelchairs, or climb over bed siderails. Orthostatic (postural) hypotension, usually defined as a decrease of 20 mm or more of systolic blood pressure after standing, has a 5% to 25% prevalence among normal elderly persons living at home [36]. It is even more common among persons with certain predisposing risk factors, including autonomic dysfunction, hypovolemia, low cardiac output, parkinsonism, metabolic and endocrine disorders, and medications (particularly sedatives, antihypertensives, vasodilators, and antidepressants) [37]. The orthostatic drop may be more pronounced on arising in the morning because the baroreflex response is diminished after prolonged recumbency, as it is after meals and after ingestion of nitroglycerin [38, 39]. Yet, despite its high prevalence, orthostatic hypotension infrequently causes falls, particularly outside of institutions. This is perhaps because of its transient nature, which makes it difficult to detect after the fall, or because most persons with orthostatic hypotension feel light-headed and will deliberately find a seat rather than fall. Drop attacks are defined as sudden falls without loss of consciousness and without dizziness, often precipitated by a sudden change in head position. This syndrome has been attributed to transient vertebrobasilar insufficiency, although it is probably caused by more diverse pathophysiologic mechanisms. Although early descriptions of geriatric falls identified drop attacks as a substantial cause, more recent studies have reported a smaller proportion of perso",
"title": ""
},
{
"docid": "920c1b2b4720586b1eb90b08631d9e6f",
"text": "Linear active-power-only power flow approximations are pervasive in the planning and control of power systems. However, AC power systems are governed by a system of nonlinear non-convex power flow equations. Existing linear approximations fail to capture key power flow variables including reactive power and voltage magnitudes, both of which are necessary in many applications that require voltage management and AC power flow feasibility. This paper proposes novel linear-programming models (the LPAC models) that incorporate reactive power and voltage magnitudes in a linear power flow approximation. The LPAC models are built on a polyhedral relaxation of the cosine terms in the AC equations, as well as Taylor approximations of the remaining nonlinear terms. Experimental comparisons with AC solutions on a variety of standard IEEE and Matpower benchmarks show that the LPAC models produce accurate values for active and reactive power, phase angles, and voltage magnitudes. The potential benefits of the LPAC models are illustrated on two “proof-of-concept” studies in power restoration and capacitor placement.",
"title": ""
},
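As an illustrative sketch, not taken from the passage above: one standard way to build a polyhedral relaxation of the cosine terms mentioned in the LPAC abstract is to restrict the phase-angle difference to an interval where cos is concave and add tangent-line cuts. The interval, the number of cuts, and all names below are arbitrary choices for illustration, not values from the paper.

import numpy as np

def cosine_cuts(theta_lo=-np.pi / 3, theta_hi=np.pi / 3, n_cuts=9):
    # Tangent to cos at t_k:  cos(t_k) - sin(t_k) * (theta - t_k).
    # On an interval where cos is concave these lines lie above cos, so the
    # constraints  c <= a * theta + b  form a polyhedral outer envelope.
    t = np.linspace(theta_lo, theta_hi, n_cuts)
    slopes = -np.sin(t)
    intercepts = np.cos(t) + t * np.sin(t)
    return list(zip(slopes, intercepts))

# Sanity check: every cut over-estimates cos(theta) on the interval.
theta = np.linspace(-np.pi / 3, np.pi / 3, 1001)
for a, b in cosine_cuts():
    assert np.all(a * theta + b >= np.cos(theta) - 1e-9)

A linear program would then treat c as a decision variable standing in for cos(theta) subject to these linear constraints.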
{
"docid": "281a9d0c9ad186c1aabde8c56c41cefa",
"text": "Hardware manipulations pose a serious threat to numerous systems, ranging from a myriad of smart-X devices to military systems. In many attack scenarios an adversary merely has access to the low-level, potentially obfuscated gate-level netlist. In general, the attacker possesses minimal information and faces the costly and time-consuming task of reverse engineering the design to identify security-critical circuitry, followed by the insertion of a meaningful hardware Trojan. These challenges have been considered only in passing by the research community. The contribution of this work is threefold: First, we present HAL, a comprehensive reverse engineering and manipulation framework for gate-level netlists. HAL allows automating defensive design analysis (e.g., including arbitrary Trojan detection algorithms with minimal effort) as well as offensive reverse engineering and targeted logic insertion. Second, we present a novel static analysis Trojan detection technique ANGEL which considerably reduces the false-positive detection rate of the detection technique FANCI. Furthermore, we demonstrate that ANGEL is capable of automatically detecting Trojans obfuscated with DeTrust. Third, we demonstrate how a malicious party can semi-automatically inject hardware Trojans into third-party designs. We present reverse engineering algorithms to disarm and trick cryptographic self-tests, and subtly leak cryptographic keys without any a priori knowledge of the design’s internal workings.",
"title": ""
}
] |
scidocsrr
|
e571f60f5dbf8dae0b1d31a80d1584fa
|
A high efficiency low cost direct battery balancing circuit using a multi-winding transformer with reduced switch count
|
[
{
"docid": "90c3543eca7a689188725e610e106ce9",
"text": "Lithium-based battery technology offers performance advantages over traditional battery technologies at the cost of increased monitoring and controls overhead. Multiple-cell Lead-Acid battery packs can be equalized by a controlled overcharge, eliminating the need to periodically adjust individual cells to match the rest of the pack. Lithium-based based batteries cannot be equalized by an overcharge, so alternative methods are required. This paper discusses several cell-balancing methodologies. Active cell balancing methods remove charge from one or more high cells and deliver the charge to one or more low cells. Dissipative techniques find the high cells in the pack, and remove excess energy through a resistive element until their charges match the low cells. This paper presents the theory of charge balancing techniques and the advantages and disadvantages of the presented methods. INTRODUCTION Lithium Ion and Lithium Polymer battery chemistries cannot be overcharged without damaging active materials [1-5]. The electrolyte breakdown voltage is precariously close to the fully charged terminal voltage, typically in the range of 4.1 to 4.3 volts/cell. Therefore, careful monitoring and controls must be implemented to avoid any single cell from experiencing an overvoltage due to excessive charging. Single lithium-based cells require monitoring so that cell voltage does not exceed predefined limits of the chemistry. Series connected lithium cells pose a more complex problem: each cell in the string must be monitored and controlled. Even though the pack voltage may appear to be within acceptable limits, one cell of the series string may be experiencing damaging voltage due to cell-to-cell imbalances. Traditionally, cell-to-cell imbalances in lead-acid batteries have been solved by controlled overcharging [6,7]. Leadacid batteries can be brought into overcharge conditions without permanent cell damage, as the excess energy is released by gassing. This gassing mechanism is the natural method for balancing a series string of lead acid battery cells. Other chemistries, such as NiMH, exhibit similar natural cell-to-cell balancing mechanisms [8]. Because a Lithium battery cannot be overcharged, there is no natural mechanism for cell equalization. Therefore, an alternative method must be employed. This paper discusses three categories of cell balancing methodologies: charging methods, active methods, and passive methods. Cell balancing is necessary for highly transient lithium battery applications, especially those applications where charging occurs frequently, such as regenerative braking in electric vehicle (EV) or hybrid electric vehicle (HEV) applications. Regenerative braking can cause problems for Lithium Ion batteries because the instantaneous regenerative braking current inrush can cause battery voltage to increase suddenly, possibly over the electrolyte breakdown threshold voltage. Deviations in cell behaviors generally occur because of two phenomenon: changes in internal impedance or cell capacity reduction due to aging. In either case, if one cell in a battery pack experiences deviant cell behavior, that cell becomes a likely candidate to overvoltage during high power charging events. Cells with reduced capacity or high internal impedance tend to have large voltage swings when charging and discharging. For HEV applications, it is necessary to cell balance lithium chemistry because of this overvoltage potential. 
For EV applications, cell balancing is desirable to obtain maximum usable capacity from the battery pack. During charging, an out-of-balance cell may prematurely approach the end-of-charge voltage (typically 4.1 to 4.3 volts/cell) and trigger the charger to turn off. Cell balancing is useful to control the higher voltage cells until the rest of the cells can catch up. In this way, the charger is not turned off until the cells simultaneously reach the end-of-charge voltage. END-OF-CHARGE CELL BALANCING METHODS Typically, cell-balancing methods employed during and at end-of-charging are useful only for electric vehicle purposes. This is because electric vehicle batteries are generally fully charged between each use cycle. Hybrid electric vehicle batteries may or may not be maintained fully charged, resulting in unpredictable end-of-charge conditions to enact the balancing mechanism. Hybrid vehicle batteries also require both high power charge (regenerative braking) and discharge (launch assist or boost) capabilities. For this reason, their batteries are usually maintained at a SOC that can discharge the required power but still have enough headroom to accept the necessary regenerative power. To fully charge the HEV battery for cell balancing would diminish charge acceptance capability (regenerative braking). CHARGE SHUNTING The charge-shunting cell balancing method selectively shunts the charging current around each cell as they become fully charged (Figure 1). This method is most efficiently employed on systems with known charge rates. The shunt resistor R is sized to shunt exactly the charging current I when the fully charged cell voltage V is reached. If the charging current decreases, resistor R will discharge the shunted cell. To avoid extremely large power dissipations due to R, this method is best used with stepped-current chargers with a small end-of-charge current.",
"title": ""
}
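As a quick worked example of the shunt-resistor sizing rule stated in the passage above (R chosen so it carries the full charging current I once the end-of-charge voltage V is reached), here is a minimal sketch; the numeric values are invented for illustration and are not from the source.

def shunt_resistor(v_end_of_charge, i_charge):
    # R = V / I so the resistor bypasses the full charging current once the
    # cell reaches its end-of-charge voltage; P = V * I is the worst-case
    # power the resistor must dissipate at that point.
    r_ohms = v_end_of_charge / i_charge
    p_watts = v_end_of_charge * i_charge
    return r_ohms, p_watts

r, p = shunt_resistor(v_end_of_charge=4.2, i_charge=0.5)
print(f"R = {r:.1f} ohm, dissipation up to {p:.2f} W")  # R = 8.4 ohm, up to 2.10 W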
] |
[
{
"docid": "afae94714340326278c1629aa4ecc48c",
"text": "The purpose of this investigation was to examine the influence of upper-body static stretching and dynamic stretching on upper-body muscular performance. Eleven healthy men, who were National Collegiate Athletic Association Division I track and field athletes (age, 19.6 +/- 1.7 years; body mass, 93.7 +/- 13.8 kg; height, 183.6 +/- 4.6 cm; bench press 1 repetition maximum [1RM], 106.2 +/- 23.0 kg), participated in this study. Over 4 sessions, subjects participated in 4 different stretching protocols (i.e., no stretching, static stretching, dynamic stretching, and combined static and dynamic stretching) in a balanced randomized order followed by 4 tests: 30% of 1 RM bench throw, isometric bench press, overhead medicine ball throw, and lateral medicine ball throw. Depending on the exercise, test peak power (Pmax), peak force (Fmax), peak acceleration (Amax), peak velocity (Vmax), and peak displacement (Dmax) were measured. There were no differences among stretch trials for Pmax, Fmax, Amax, Vmax, or Dmax for the bench throw or for Fmax for the isometric bench press. For the overhead medicine ball throw, there were no differences among stretch trials for Vmax or Dmax. For the lateral medicine ball throw, there was no difference in Vmax among stretch trials; however, Dmax was significantly larger (p </= 0.05) for the static and dynamic condition compared to the static-only condition. In general, there was no short-term effect of stretching on upper-body muscular performance in young adult male athletes, regardless of stretch mode, potentially due to the amount of rest used after stretching before the performances. Since throwing performance was largely unaffected by static or dynamic upper-body stretching, athletes competing in the field events could perform upper-body stretching, if enough time were allowed before the performance. However, prior studies on lower-body musculature have demonstrated dramatic negative effects on speed and power. Therefore, it is recommended that a dynamic warm-up be used for the entire warm-up.",
"title": ""
},
{
"docid": "17fd082aeebf148294a51bdefdce4403",
"text": "The appearance of Agile methods has been the most noticeable change to software process thinking in the last fifteen years [16], but in fact many of the “Agile ideas” have been around since 70’s or even before. Many studies and reviews have been conducted about Agile methods which ascribe their emergence as a reaction against traditional methods. In this paper, we argue that although Agile methods are new as a whole, they have strong roots in the history of software engineering. In addition to the iterative and incremental approaches that have been in use since 1957 [21], people who criticised the traditional methods suggested alternative approaches which were actually Agile ideas such as the response to change, customer involvement, and working software over documentation. The authors of this paper believe that education about the history of Agile thinking will help to develop better understanding as well as promoting the use of Agile methods. We therefore present and discuss the reasons behind the development and introduction of Agile methods, as a reaction to traditional methods, as a result of people's experience, and in particular focusing on reusing ideas from history.",
"title": ""
},
{
"docid": "7b93d57ea77d234c507f8d155e518ebc",
"text": "A cascade of fully convolutional neural networks is proposed to segment multi-modal Magnetic Resonance (MR) images with brain tumor into background and three hierarchical regions: whole tumor, tumor core and enhancing tumor core. The cascade is designed to decompose the multi-class segmentation problem into a sequence of three binary segmentation problems according to the subregion hierarchy. The whole tumor is segmented in the first step and the bounding box of the result is used for the tumor core segmentation in the second step. The enhancing tumor core is then segmented based on the bounding box of the tumor core segmentation result. Our networks consist of multiple layers of anisotropic and dilated convolution filters, and they are combined with multi-view fusion to reduce false positives. Residual connections and multi-scale predictions are employed in these networks to boost the segmentation performance. Experiments with BraTS 2017 validation set show that the proposed method achieved average Dice scores of 0.7859, 0.9050, 0.8378 for enhancing tumor core, whole tumor and tumor core, respectively. The corresponding values for BraTS 2017 testing set were 0.7831, 0.8739, and 0.7748, respectively.",
"title": ""
},
{
"docid": "117590d8d7a9c4efb9a19e4cd3e220fc",
"text": "We present in this paper the language NoFun for stating component quality in the framework of the ISO/IEC quality standards. The language consists of three different parts. In the first one, software quality characteristics and attributes are defined, probably in a hiera rchical manner. As part of this definition, abstract quality models can be formulated and fu rther refined into more specialised ones. In the second part, values are assigned to component quality basic attributes. In the third one, quality requirements can be stated over components, both context-free (universal quality properties) and context-dependent (quality properties for a given framework -software domain, company, project, etc.). Last, we address to the translation of the language to UML, using its extension mechanisms for capturing the fundamental non-functional concepts.",
"title": ""
},
{
"docid": "d6adda476cc8bd69c37bd2d00f0dace4",
"text": "The conceptualization of a distinct construct known as statistics anxiety has led to the development of numerous rating scales, including the Statistical Anxiety Rating Scale (STARS), designed to assess levels of statistics anxiety. In the current study, the STARS was administered to a sample of 423 undergraduate and graduate students from a midsized, western United States university. The Rasch measurement rating scale model was used to analyze scores from the STARS. Misfitting items were removed from the analysis. In general, items from the six subscales represented a broad range of abilities, with the major exception being a lack of items at the lower extremes of the subscales. Additionally, a differential item functioning (DIF) analysis was performed across sex and student classification. Several items displayed DIF, which indicates subgroups may ascribe different meanings to those items. The paper concludes with several recommendations for researchers considering using the STARS.",
"title": ""
},
{
"docid": "e146a0534b5a81ac6f332332056ae58c",
"text": "Paraphrase identification is an important topic in artificial intelligence and this task is often tackled as sequence alignment and matching. Traditional alignment methods take advantage of attention mechanism, which is a soft-max weighting technique. Weighting technique could pick out the most similar/dissimilar parts, but is weak in modeling the aligned unmatched parts, which are the crucial evidence to identify paraphrase. In this paper, we empower neural architecture with Hungarian algorithm to extract the aligned unmatched parts. Specifically, first, our model applies BiLSTM to parse the input sentences into hidden representations. Then, Hungarian layer leverages the hidden representations to extract the aligned unmatched parts. Last, we apply cosine similarity to metric the aligned unmatched parts for a final discrimination. Extensive experiments show that our model outperforms other baselines, substantially and significantly.",
"title": ""
},
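A minimal sketch, not the authors' code, of the alignment step described above using an off-the-shelf assignment solver: token vectors of two sentences are aligned by the Hungarian algorithm to maximize total cosine similarity, and the aligned pairs with low similarity play the role of the "aligned unmatched parts". In the paper the vectors would be BiLSTM hidden states and the downstream discrimination step differs; the inputs and threshold here are illustrative assumptions.

import numpy as np
from scipy.optimize import linear_sum_assignment

def aligned_unmatched(vecs_a, vecs_b, threshold=0.5):
    # Normalize rows, build a cosine-similarity matrix, and solve the
    # assignment problem (Hungarian algorithm) to maximize total similarity.
    a = vecs_a / np.linalg.norm(vecs_a, axis=1, keepdims=True)
    b = vecs_b / np.linalg.norm(vecs_b, axis=1, keepdims=True)
    sim = a @ b.T
    rows, cols = linear_sum_assignment(-sim)  # negate cost to maximize
    # Aligned pairs whose similarity stays low are the "unmatched" evidence.
    return [(i, j, float(sim[i, j])) for i, j in zip(rows, cols) if sim[i, j] < threshold]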
{
"docid": "9516d06751aa51edb0b0a3e2b75e0bde",
"text": "This paper presents a pilot-based compensation algorithm for mitigation of frequency-selective I/Q imbalances in direct-conversion OFDM transmitters. By deploying a feedback loop from RF to baseband, together with a properly-designed pilot signal structure, the I/Q imbalance properties of the transmitter are efficiently estimated in a subcarrier-wise manner. Based on the obtained I/Q imbalance knowledge, the imbalance effects on the actual transmit waveform are then mitigated by baseband pre-distortion acting on the mirror-subcarrier signals. The compensation performance of the proposed structure is analyzed using extensive computer simulations, indicating that very high image rejection ratios can be achieved in practical system set-ups with reasonable pilot signal lengths.",
"title": ""
},
{
"docid": "69be80d84b30099286a36c3e653281d3",
"text": "Since the middle ages, essential oils have been widely used for bactericidal, virucidal, fungicidal, antiparasitical, insecticidal, medicinal and cosmetic applications, especially nowadays in pharmaceutical, sanitary, cosmetic, agricultural and food industries. Because of the mode of extraction, mostly by distillation from aromatic plants, they contain a variety of volatile molecules such as terpenes and terpenoids, phenol-derived aromatic components and aliphatic components. In vitro physicochemical assays characterise most of them as antioxidants. However, recent work shows that in eukaryotic cells, essential oils can act as prooxidants affecting inner cell membranes and organelles such as mitochondria. Depending on type and concentration, they exhibit cytotoxic effects on living cells but are usually non-genotoxic. In some cases, changes in intracellular redox potential and mitochondrial dysfunction induced by essential oils can be associated with their capacity to exert antigenotoxic effects. These findings suggest that, at least in part, the encountered beneficial effects of essential oils are due to prooxidant effects on the cellular level.",
"title": ""
},
{
"docid": "5d5c3c8cc8344a8c5d18313bec9adb04",
"text": "Research in reinforcement learning (RL) has thus far concentrated on two optimality criteria: the discounted framework, which has been very well-studied, and the average-reward framework, in which interest is rapidly increasing. In this paper, we present a framework called sensitive discount optimality which ooers an elegant way of linking these two paradigms. Although sensitive discount optimality has been well studied in dynamic programming, with several provably convergent algorithms, it has not received any attention in RL. This framework is based on studying the properties of the expected cumulative discounted reward, as discounting tends to 1. Under these conditions, the cumulative discounted reward can be expanded using a Laurent series expansion to yields a sequence of terms, the rst of which is the average reward, the second involves the average adjusted sum of rewards (or bias), etc. We use the sensitive discount optimality framework to derive a new model-free average reward technique, which is related to Q-learning type methods proposed by Bertsekas, Schwartz, and Singh, but which unlike these previous methods, optimizes both the rst and second terms in the Laurent series (average reward and bias values). Statement: This paper has not been submitted to any other conference.",
"title": ""
},
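For illustration only (this form is not quoted from the abstract): under standard unichain assumptions, the leading terms of the Laurent expansion referred to above are commonly written in LaTeX as

V_\gamma(s) = \frac{\rho}{1-\gamma} + h(s) + o(1) \qquad \text{as } \gamma \to 1,

where \rho is the average reward (gain) and h is the bias. Optimizing the first two terms jointly, gain first and then bias among gain-optimal policies, is what the model-free technique in the abstract targets.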
{
"docid": "ba94bc5f5762017aed0c307ce89c0558",
"text": "Carsharing has emerged as an alternative to vehicle ownership and is a rapidly expanding global market. Particularly through the flexibility of free-floating models, car sharing complements public transport since customers do not need to return cars to specific stations. We present a novel data analytics approach that provides decision support to car sharing operators -- from local start-ups to global players -- in maneuvering this constantly growing and changing market environment. Using a large set of rental data, as well as zero-inflated and geographically weighted regression models, we derive indicators for the attractiveness of certain areas based on points of interest in their vicinity. These indicators are valuable for a variety of operational and strategic decisions. As a demonstration project, we present a case study of Berlin, where the indicators are used to identify promising regions for business area expansion.",
"title": ""
},
{
"docid": "f93e72b45a185e06d03d15791d312021",
"text": "BACKGROUND\nAbnormal scar development following burn injury can cause substantial physical and psychological distress to children and their families. Common burn scar prevention and management techniques include silicone therapy, pressure garment therapy, or a combination of both. Currently, no definitive, high-quality evidence is available for the effectiveness of topical silicone gel or pressure garment therapy for the prevention and management of burn scars in the paediatric population. Thus, this study aims to determine the effectiveness of these treatments in children.\n\n\nMETHODS\nA randomised controlled trial will be conducted at a large tertiary metropolitan children's hospital in Australia. Participants will be randomised to one of three groups: Strataderm® topical silicone gel only, pressure garment therapy only, or combined Strataderm® topical silicone gel and pressure garment therapy. Participants will include 135 children (45 per group) up to 16 years of age who are referred for scar management for a new burn. Children up to 18 years of age will also be recruited following surgery for burn scar reconstruction. Primary outcomes are scar itch intensity and scar thickness. Secondary outcomes include scar characteristics (e.g. colour, pigmentation, pliability, pain), the patient's, caregiver's and therapist's overall opinion of the scar, health service costs, adherence, health-related quality of life, treatment satisfaction and adverse effects. Measures will be completed on up to two sites per person at baseline and 1 week post scar management commencement, 3 months and 6 months post burn, or post burn scar reconstruction. Data will be analysed using descriptive statistics and univariate and multivariate regression analyses.\n\n\nDISCUSSION\nResults of this study will determine the effectiveness of three noninvasive scar interventions in children at risk of, and with, scarring post burn or post reconstruction.\n\n\nTRIAL REGISTRATION\nAustralian New Zealand Clinical Trials Registry, ACTRN12616001100482 . Registered on 5 August 2016.",
"title": ""
},
{
"docid": "45f9e645fae1f0a131c369164ba4079f",
"text": "Gasification is one of the promising technologies to convert biomass to gaseous fuels for distributed power generation. However, the commercial exploitation of biomass energy suffers from a number of logistics and technological challenges. In this review, the barriers in each of the steps from the collection of biomass to electricity generation are highlighted. The effects of parameters in supply chain management, pretreatment and conversion of biomass to gas, and cleaning and utilization of gas for power generation are discussed. Based on the studies, until recently, the gasification of biomass and gas cleaning are the most challenging part. For electricity generation, either using engine or gas turbine requires a stringent specification of gas composition and tar concentration in the product gas. Different types of updraft and downdraft gasifiers have been developed for gasification and a number of physical and catalytic tar separation methods have been investigated. However, the most efficient and popular one is yet to be developed for commercial purpose. In fact, the efficient gasification and gas cleaning methods can produce highly burnable gas with less tar content, so as to reduce the total consumption of biomass for a desired quantity of electricity generation. According to the recent report, an advanced gasification method with efficient tar cleaning can significantly reduce the biomass consumption, and thus the logistics and biomass pretreatment problems can be ultimately reduced. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "76ef678b28d41317e2409b9fd2109f35",
"text": "Conflicting guidelines for excisions about the alar base led us to develop calibrated alar base excision, a modification of Weir's approach. In approximately 20% of 1500 rhinoplasties this technique was utilized as a final step. Of these patients, 95% had lateral wallexcess (“tall nostrils”), 2% had nostril floor excess (“wide nostrils”), 2% had a combination of these (“tall-wide nostrils”), and 1% had thick nostril rims. Lateral wall excess length is corrected by a truncated crescent excision of the lateral wall above the alar crease. Nasal floor excess is improved by an excision of the nasal sill. Combination noses (e.g., tall-wide) are approached with a combination alar base excision. Finally, noses with thick rims are improved with diamond excision. Closure of the excision is accomplished with fine simple external sutures. Electrocautery is unnecessary and deep sutures are utilized only in wide noses. Few complications were noted. Benefits of this approach include straightforward surgical guidelines, a natural-appearing correction, avoidance of notching or obvious scarring, and it is quick and simple.",
"title": ""
},
{
"docid": "36d7f776d7297f67a136825e9628effc",
"text": "Random walks are at the heart of many existing network embedding methods. However, such algorithms have many limitations that arise from the use of random walks, e.g., the features resulting from these methods are unable to transfer to new nodes and graphs as they are tied to vertex identity. In this work, we introduce the Role2Vec framework which uses the flexible notion of attributed random walks, and serves as a basis for generalizing existing methods such as DeepWalk, node2vec, and many others that leverage random walks. Our proposed framework enables these methods to be more widely applicable for both transductive and inductive learning as well as for use on graphs with attributes (if available). This is achieved by learning functions that generalize to new nodes and graphs. We show that our proposed framework is effective with an average AUC improvement of 16.55% while requiring on average 853x less space than existing methods on a variety of graphs.",
"title": ""
},
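A minimal sketch, not the authors' implementation, of the attributed-walk idea described above: walks are recorded over role labels derived from node attributes (here, a made-up binned-degree role) rather than node identities, which is what allows the learned features to transfer to unseen nodes and graphs. The toy graph and role function are invented for the example.

import random

def attributed_walks(adj, node_role, walk_len=10, walks_per_node=5, seed=0):
    # Standard random walks, except each visited node is emitted as its
    # role label instead of its identity.
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            node = start
            walk = [node_role[node]]
            for _ in range(walk_len - 1):
                if not adj[node]:
                    break
                node = rng.choice(adj[node])
                walk.append(node_role[node])
            walks.append(walk)
    return walks

# Toy graph with degree-based roles (all values invented for the example).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
roles = {n: "deg%d" % len(nbrs) for n, nbrs in adj.items()}
print(attributed_walks(adj, roles, walk_len=4, walks_per_node=1))

The resulting role sequences could then be fed to any skip-gram-style embedding model in place of node-identity walks.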
{
"docid": "544a5a95a169b9ac47960780ac09de80",
"text": "Monte Carlo Tree Search methods have led to huge progress in Computer Go. Still, program performance is uneven most current Go programs are much stronger in some aspects of the game, such as local fighting and positional evaluation, than in others. Well known weaknesses of many programs include the handling of several simultaneous fights, including the “two safe groups” problem, and dealing with coexistence in seki. Starting with a review of MCTS techniques, several conjectures regarding the behavior of MCTS-based Go programs in specific types of Go situations are made. Then, an extensive empirical study of ten leading Go programs investigates their performance of two specifically designed test sets containing “two safe group” and seki situations. The results give a good indication of the state of the art in computer Go as of 2012/2013. They show that while a few of the very top programs can apparently solve most of these evaluation problems in their playouts already, these problems are difficult to solve by global search. ∗[email protected] †[email protected]",
"title": ""
},
{
"docid": "bde03a5d90507314ce5f034b9b764417",
"text": "Autonomous household robots are supposed to accomplish complex tasks like cleaning the dishes which involve both navigation and manipulation within the environment. For navigation, spatial information is mostly sufficient, but manipulation tasks raise the demand for deeper knowledge about objects, such as their types, their functions, or the way how they can be used. We present KNOWROB-MAP, a system for building environment models for robots by combining spatial information about objects in the environment with encyclopedic knowledge about the types and properties of objects, with common-sense knowledge describing what the objects can be used for, and with knowledge derived from observations of human activities by learning statistical relational models. In this paper, we describe the concept and implementation of KNOWROB-MAP and present several examples demonstrating the range of information the system can provide to autonomous robots.",
"title": ""
},
{
"docid": "af40c4fe439738a72ee6b476aeb75f82",
"text": "Object tracking is still a critical and challenging problem with many applications in computer vision. For this challenge, more and more researchers pay attention to applying deep learning to get powerful feature for better tracking accuracy. In this paper, a novel triplet loss is proposed to extract expressive deep feature for object tracking by adding it into Siamese network framework instead of pairwise loss for training. Without adding any inputs, our approach is able to utilize more elements for training to achieve more powerful feature via the combination of original samples. Furthermore, we propose a theoretical analysis by combining comparison of gradients and back-propagation, to prove the effectiveness of our method. In experiments, we apply the proposed triplet loss for three real-time trackers based on Siamese network. And the results on several popular tracking benchmarks show our variants operate at almost the same frame-rate with baseline trackers and achieve superior tracking performance than them, as well as the comparable accuracy with recent state-of-the-art real-time trackers.",
"title": ""
},
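To make the training signal discussed above concrete, here is the textbook triplet margin loss sketched in plain NumPy; the paper's actual loss is defined on Siamese response maps and may differ in detail, and the margin value below is just a common default.

import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    # Squared L2 distances for anchor/positive and anchor/negative pairs.
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    # Pull positives closer than negatives by at least `margin`.
    return float(np.mean(np.maximum(d_pos - d_neg + margin, 0.0)))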
{
"docid": "46714f589bdf57d734fc4eff8741d39b",
"text": "As an essential operation in data cleaning, the similarity join has attracted considerable attention from the database community. In this article, we study string similarity joins with edit-distance constraints, which find similar string pairs from two large sets of strings whose edit distance is within a given threshold. Existing algorithms are efficient either for short strings or for long strings, and there is no algorithm that can efficiently and adaptively support both short strings and long strings. To address this problem, we propose a new filter, called the segment filter. We partition a string into a set of segments and use the segments as a filter to find similar string pairs. We first create inverted indices for the segments. Then for each string, we select some of its substrings, identify the selected substrings from the inverted indices, and take strings on the inverted lists of the found substrings as candidates of this string. Finally, we verify the candidates to generate the final answer. We devise efficient techniques to select substrings and prove that our method can minimize the number of selected substrings. We develop novel pruning techniques to efficiently verify the candidates. We also extend our techniques to support normalized edit distance. Experimental results show that our algorithms are efficient for both short strings and long strings, and outperform state-of-the-art methods on real-world datasets.",
"title": ""
},
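A simplified sketch of the pigeonhole principle behind the segment filter described above: if ed(q, s) <= tau and s is cut into tau + 1 segments, at least one segment of s must appear in q verbatim. The real algorithm uses inverted indices and careful substring selection for efficiency; this brute-force version, with a simplified partition scheme and invented test strings, only illustrates the filtering idea.

def segments(s, tau):
    # Split s into tau + 1 roughly equal pieces.
    k = tau + 1
    bounds = [round(i * len(s) / k) for i in range(k + 1)]
    return [s[bounds[i]:bounds[i + 1]] for i in range(k)]

def candidate_pairs(strings, tau):
    # Keep only pairs that survive the segment filter; these would then be
    # verified with an exact edit-distance computation.
    cands = set()
    for i, q in enumerate(strings):
        for j, s in enumerate(strings):
            if i < j and any(seg and seg in q for seg in segments(s, tau)):
                cands.add((i, j))
    return cands

print(candidate_pairs(["kaushik", "kaushuk", "chakrabarti"], tau=1))  # {(0, 1)}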
{
"docid": "d0b5c9c1b5fc4ba3c6a72a902196575a",
"text": "An intrinsic part of information extraction is the creation and manipulation of relations extracted from text. In this article, we develop a foundational framework where the central construct is what we call a document spanner (or just spanner for short). A spanner maps an input string into a relation over the spans (intervals specified by bounding indices) of the string. The focus of this article is on the representation of spanners. Conceptually, there are two kinds of such representations. Spanners defined in a primitive representation extract relations directly from the input string; those defined in an algebra apply algebraic operations to the primitively represented spanners. This framework is driven by SystemT, an IBM commercial product for text analysis, where the primitive representation is that of regular expressions with capture variables.\n We define additional types of primitive spanner representations by means of two kinds of automata that assign spans to variables. We prove that the first kind has the same expressive power as regular expressions with capture variables; the second kind expresses precisely the algebra of the regular spanners—the closure of the first kind under standard relational operators. The core spanners extend the regular ones by string-equality selection (an extension used in SystemT). We give some fundamental results on the expressiveness of regular and core spanners. As an example, we prove that regular spanners are closed under difference (and complement), but core spanners are not. Finally, we establish connections with related notions in the literature.",
"title": ""
},
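As a toy illustration, not taken from the article, of a primitive spanner as a regex with capture variables: each named group is bound to a span, i.e. a (start, end) index pair, rather than a substring. The pattern and input text are invented for the example.

import re

def regex_spanner(pattern, text):
    # For every match, bind each named capture variable to the span
    # (start, end) it covers in the input string.
    return [{name: m.span(name) for name in m.groupdict()}
            for m in re.finditer(pattern, text)]

text = "call Alice at 555-1234 or Bob at 555-9876"
print(regex_spanner(r"(?P<name>[A-Z][a-z]+) at (?P<phone>\d{3}-\d{4})", text))
# [{'name': (5, 10), 'phone': (14, 22)}, {'name': (26, 29), 'phone': (33, 41)}]

Algebraic spanners would then apply relational operators (union, projection, join, string-equality selection) to span relations like the one returned here.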
{
"docid": "851a966bbfee843e5ae1eaf21482ef87",
"text": "The Pittsburgh Sleep Quality Index (PSQI) is a widely used measure of sleep quality in adolescents, but information regarding its psychometric strengths and weaknesses in this population is limited. In particular, questions remain regarding whether it measures one or two sleep quality domains. The aims of the present study were to (a) adapt the PSQI for use in adolescents and young adults, and (b) evaluate the psychometric properties of the adapted measure in this population. The PSQI was slightly modified to make it more appropriate for use in youth populations and was translated into Spanish for administration to the sample population available to the study investigators. It was then administered with validity criterion measures to a community-based sample of Spanish adolescents and young adults (AYA) between 14 and 24 years old (N = 216). The results indicated that the questionnaire (AYA-PSQI-S) assesses a single factor. The total score evidenced good convergent and divergent validity and moderate reliability (Cronbach's alpha = .72). The AYA-PSQI-S demonstrates adequate psychometric properties for use in clinical trials involving adolescents and young adults. Additional research to further evaluate the reliability and validity of the measure for use in clinical settings is warranted.",
"title": ""
}
] |
scidocsrr
|
e1be5d13218f18cafbe1dc3eb021dafd
|
6-Year follow-up of ventral monosegmental spondylodesis of incomplete burst fractures of the thoracolumbar spine using three cortical iliac crest bone grafts
|
[
{
"docid": "83b01384fb6a93d038de1a47f7824f8a",
"text": "In view of the current level of knowledge and the numerous treatment possibilities, none of the existing classification systems of thoracic and lumbar injuries is completely satisfactory. As a result of more than a decade of consideration of the subject matter and a review of 1445 consecutive thoracolumbar injuries, a comprehensive classification of thoracic and lumbar injuries is proposed. The classification is primarily based on pathomorphological criteria. Categories are established according to the main mechanism of injury, pathomorphological uniformity, and in consideration of prognostic aspects regarding healing potential. The classification reflects a progressive scale of morphological damage by which the degree of instability is determined. The severity of the injury in terms of instability is expressed by its ranking within the classification system. A simple grid, the 3-3-3 scheme of the AO fracture classification, was used in grouping the injuries. This grid consists of three types: A, B, and C. Every type has three groups, each of which contains three subgroups with specifications. The types have a fundamental injury pattern which is determined by the three most important mechanisms acting on the spine: compression, distraction, and axial torque. Type A (vertebral body compression) focuses on injury patterns of the vertebral body. Type B injuries (anterior and posterior element injuries with distraction) are characterized by transverse disruption either anteriorly or posteriorly. Type C lesions (anterior and posterior element injuries with rotation) describe injury patterns resulting from axial torque. The latter are most often superimposed on either type A or type B lesions. Morphological criteria are predominantly used for further subdivision of the injuries. Severity progresses from type A through type C as well as within the types, groups, and further subdivisions. The 1445 cases were analyzed with regard to the level of the main injury, the frequency of types and groups, and the incidence of neurological deficit. Most injuries occurred around the thoracolumbar junction. The upper and lower end of the thoracolumbar spine and the T 10 level were most infrequently injured. Type A fractures were found in 66.1 %, type B in 14.5%, and type C in 19.4% of the cases. Stable type Al fractures accounted for 34.7% of the total. Some injury patterns are typical for certain sections of the thoracolumbar spine and others for age groups. The neurological deficit, ranging from complete paraplegia to a single root lesion, was evaluated in 1212 cases. The overall incidence was 22% and increased significantly from type to type: neurological deficit was present in 14% of type A, 32% of type B, and 55% of type C lesions. Only 2% of the Al and 4% of the A2 fractures showed any neurological deficit. The classification is comprehensive as almost any injury can be itemized according to easily recognizable and consistent radiographic and clinical findings. Every injury can be defined alphanumerically or by a descriptive name. The classification can, however, also be used in an abbreviated form without impairment of the information most important for clinical practice. Identification of the fundamental nature of an injury is facilitated by a simple algorithm. Recognizing the nature of the injury, its degree of instability, and prognostic aspects are decisive for the choice of the most appropriate treatment. 
Experience has shown that the new classification is especially useful in this respect.",
"title": ""
}
] |
[
{
"docid": "38c78be386aa3827f39825f9e40aa3cc",
"text": "Back Side Illumination (BSI) CMOS image sensors with two-layer photo detectors (2LPDs) have been fabricated and evaluated. The test pixel array has green pixels (2.2um x 2.2um) and a magenta pixel (2.2um x 4.4um). The green pixel has a single-layer photo detector (1LPD). The magenta pixel has a 2LPD and a vertical charge transfer (VCT) path to contact a back side photo detector. The 2LPD and the VCT were implemented by high-energy ion implantation from the circuit side. Measured spectral response curves from the 2LPDs fitted well with those estimated based on lightabsorption theory for Silicon detectors. Our measurement results show that the keys to realize the 2LPD in BSI are; (1) the reduction of crosstalk to the VCT from adjacent pixels and (2) controlling the backside photo detector thickness variance to reduce color signal variations.",
"title": ""
},
{
"docid": "1c02a92b4fbabddcefccd4c347186c60",
"text": "Meeting future goals for aircraft and air traffic system performance will require new airframes with more highly integrated propulsion. Previous studies have evaluated hybrid wing body (HWB) configurations with various numbers of engines and with increasing degrees of propulsion-airframe integration. A recently published configuration with 12 small engines partially embedded in a HWB aircraft, reviewed herein, serves as the airframe baseline for the new concept aircraft that is the subject of this paper. To achieve high cruise efficiency, a high lift-to-drag ratio HWB was adopted as the baseline airframe along with boundary layer ingestion inlets and distributed thrust nozzles to fill in the wakes generated by the vehicle. The distributed powered-lift propulsion concept for the baseline vehicle used a simple, high-lift-capable internally blown flap or jet flap system with a number of small high bypass ratio turbofan engines in the airframe. In that concept, the engine flow path from the inlet to the nozzle is direct and does not involve complicated internal ducts through the airframe to redistribute the engine flow. In addition, partially embedded engines, distributed along the upper surface of the HWB airframe, provide noise reduction through airframe shielding and promote jet flow mixing with the ambient airflow. To improve performance and to reduce noise and environmental impact even further, a drastic change in the propulsion system is proposed in this paper. The new concept adopts the previous baseline cruise-efficient short take-off and landing (CESTOL) airframe but employs a number of superconducting motors to drive the distributed fans rather than using many small conventional engines. The power to drive these electric fans is generated by two remotely located gas-turbine-driven superconducting generators. This arrangement allows many small partially embedded fans while retaining the superior efficiency of large core engines, which are physically separated but connected through electric power lines to the fans. This paper presents a brief description of the earlier CESTOL vehicle concept and the newly proposed electrically driven fan concept vehicle, using the previous CESTOL vehicle as a baseline.",
"title": ""
},
{
"docid": "71237e75cb57c3514fe50176a940de09",
"text": "Tuning a pre-trained network is commonly thought to improve data efficiency. However, Kaiming He et al. (2018) have called into question the utility of pre-training by showing that training from scratch can often yield similar performance, should the model train long enough. We show that although pre-training may not improve performance on traditional classification metrics, it does provide large benefits to model robustness and uncertainty. Through extensive experiments on label corruption, class imbalance, adversarial examples, out-of-distribution detection, and confidence calibration, we demonstrate large gains from pre-training and complementary effects with task-specific methods. We show approximately a 30% relative improvement in label noise robustness and a 10% absolute improvement in adversarial robustness on CIFAR10 and CIFAR-100. In some cases, using pretraining without task-specific methods surpasses the state-of-the-art, highlighting the importance of using pre-training when evaluating future methods on robustness and uncertainty tasks.",
"title": ""
},
{
"docid": "398040041440f597b106c49c79be27ea",
"text": "BACKGROUND\nRecently, human germinal center-associated lymphoma (HGAL) gene protein has been proposed as an adjunctive follicular marker to CD10 and BCL6.\n\n\nMETHODS\nOur aim was to evaluate immunoreactivity for HGAL in 82 cases of follicular lymphomas (FLs)--67 nodal, 5 cutaneous and 10 transformed--which were all analysed histologically, by immunohistochemistry and PCR.\n\n\nRESULTS\nImmunostaining for HGAL was more frequently positive (97.6%) than that for BCL6 (92.7%) and CD10 (90.2%) in FLs; the cases negative for bcl6 and/or for CD10 were all positive for HGAL, whereas the two cases negative for HGAL were positive with BCL6; no difference in HGAL immunostaining was found among different malignant subtypes or grades.\n\n\nCONCLUSIONS\nTherefore, HGAL can be used in the immunostaining of FLs as the most sensitive germinal center (GC)-marker; when applied alone, it would half the immunostaining costs, reserving the use of the other two markers only to HGAL-negative cases.",
"title": ""
},
{
"docid": "23190a7fed3673af72563627245d57cd",
"text": "We demonstrate an end-to-end question answering system that integrates BERT with the open-source Anserini information retrieval toolkit. In contrast to most question answering and reading comprehension models today, which operate over small amounts of input text, our system integrates best practices from IR with a BERT-based reader to identify answers from a large corpus of Wikipedia articles in an end-to-end fashion. We report large improvements over previous results on a standard benchmark test collection, showing that fine-tuning pretrained BERT with SQuAD is sufficient to achieve high accuracy in identifying answer spans.",
"title": ""
},
{
"docid": "432e7ae2e76d76dbb42d92cd9103e3d2",
"text": "Previous work has used monolingual parallel corpora to extract and generate paraphrases. We show that this task can be done using bilingual parallel corpora, a much more commonly available resource. Using alignment techniques from phrasebased statistical machine translation, we show how paraphrases in one language can be identified using a phrase in another language as a pivot. We define a paraphrase probability that allows paraphrases extracted from a bilingual parallel corpus to be ranked using translation probabilities, and show how it can be refined to take contextual information into account. We evaluate our paraphrase extraction and ranking methods using a set of manual word alignments, and contrast the quality with paraphrases extracted from automatic alignments.",
"title": ""
},
{
"docid": "cd7210c8c9784bdf56fe72acb4f9e8e2",
"text": "Many-objective (four or more objectives) optimization problems pose a great challenge to the classical Pareto-dominance based multi-objective evolutionary algorithms (MOEAs), such as NSGA-II and SPEA2. This is mainly due to the fact that the selection pressure based on Pareto-dominance degrades severely with the number of objectives increasing. Very recently, a reference-point based NSGA-II, referred as NSGA-III, is suggested to deal with many-objective problems, where the maintenance of diversity among population members is aided by supplying and adaptively updating a number of well-spread reference points. However, NSGA-III still relies on Pareto-dominance to push the population towards Pareto front (PF), leaving room for the improvement of its convergence ability. In this paper, an improved NSGA-III procedure, called θ-NSGA-III, is proposed, aiming to better tradeoff the convergence and diversity in many-objective optimization. In θ-NSGA-III, the non-dominated sorting scheme based on the proposed θ-dominance is employed to rank solutions in the environmental selection phase, which ensures both convergence and diversity. Computational experiments have shown that θ-NSGA-III is significantly better than the original NSGA-III and MOEA/D on most instances no matter in convergence and overall performance.",
"title": ""
},
{
"docid": "ef5769145c4c1ebe06af0c8b5f67e70e",
"text": "Structures of biological macromolecules determined by transmission cryoelectron microscopy (cryo-TEM) and three-dimensional image reconstruction are often displayed as surface-shaded representations with depth cueing along the viewed direction (Z cueing). Depth cueing to indicate distance from the center of virus particles (radial-depth cueing, or R cueing) has also been used. We have found that a style of R cueing in which color is applied in smooth or discontinuous gradients using the IRIS Explorer software is an informative technique for displaying the structures of virus particles solved by cryo-TEM and image reconstruction. To develop and test these methods, we used existing cryo-TEM reconstructions of mammalian reovirus particles. The newly applied visualization techniques allowed us to discern several new structural features, including sites in the inner capsid through which the viral mRNAs may be extruded after they are synthesized by the reovirus transcriptase complexes. To demonstrate the broad utility of the methods, we also applied them to cryo-TEM reconstructions of human rhinovirus, native and swollen forms of cowpea chlorotic mottle virus, truncated core of pyruvate dehydrogenase complex from Saccharomyces cerevisiae, and flagellar filament of Salmonella typhimurium. We conclude that R cueing with color gradients is a useful tool for displaying virus particles and other macromolecules analyzed by cryo-TEM and image reconstruction.",
"title": ""
},
{
"docid": "ff92de8ff0ff78c6ba451d4ce92a189d",
"text": "Recognition of the mode of motion or mode of transit of the user or platform carrying a device is needed in portable navigation, as well as other technological domains. An extensive survey on motion mode recognition approaches is provided in this survey paper. The survey compares and describes motion mode recognition approaches from different viewpoints: usability and convenience, types of devices in terms of setup mounting and data acquisition, various types of sensors used, signal processing methods employed, features extracted, and classification techniques. This paper ends with a quantitative comparison of the performance of motion mode recognition modules developed by researchers in different domains.",
"title": ""
},
{
"docid": "11cfe05879004f225aee4b3bda0ce30b",
"text": "Data mining system contain large amount of private and sensitive data such as healthcare, financial and criminal records. These private and sensitive data can not be share to every one, so privacy protection of data is required in data mining system for avoiding privacy leakage of data. Data perturbation is one of the best methods for privacy preserving. We used data perturbation method for preserving privacy as well as accuracy. In this method individual data value are distorted before data mining application. In this paper we present min max normalization transformation based data perturbation. The privacy parameters are used for measurement of privacy protection and the utility measure shows the performance of data mining technique after data distortion. We performed experiment on real life dataset and the result show that min max normalization transformation based data perturbation method is effective to protect confidential information and also maintain the performance of data mining technique after data distortion.",
"title": ""
},
{
"docid": "f6d1fbb88ae5ff63f73e7faa15c5fab9",
"text": "We propose a simple algorithm to train stochastic neural networks to draw samples from given target distributions for probabilistic inference. Our method is based on iteratively adjusting the neural network parameters so that the output changes along a Stein variational gradient direction (Liu & Wang, 2016) that maximally decreases the KL divergence with the target distribution. Our method works for any target distribution specified by their unnormalized density function, and can train any black-box architectures that are differentiable in terms of the parameters we want to adapt. We demonstrate our method with a number of applications, including variational autoencoder (VAE) with expressive encoders to model complex latent space structures, and hyper-parameter learning of MCMC samplers that allows Bayesian inference to adaptively improve itself when seeing more data.",
"title": ""
},
{
"docid": "2b540b2e48d5c381e233cb71c0cf36fe",
"text": "In this paper we review the most peculiar and interesting information-theoretic and communications features of fading channels. We first describe the statistical models of fading channels which are frequently used in the analysis and design of communication systems. Next, we focus on the information theory of fading channels, by emphasizing capacity as the most important performance measure. Both single-user and multiuser transmission are examined. Further, we describe how the structure of fading channels impacts code design, and finally overview equalization of fading multipath channels.",
"title": ""
},
{
"docid": "559b42198182e15c9868d920ce7f53ca",
"text": "Sweeping has become the workhorse algorithm for cre ating conforming hexahedral meshes of complex model s. This paper describes progress on the automatic, robust generat ion of MultiSwept meshes in CUBIT. MultiSweeping ex t nds the class of volumes that may be swept to include those with mul tiple source and multiple target surfaces. While no t yet perfect, CUBIT’s MultiSweeping has recently become more reliable, an d been extended to assemblies of volumes. Sweep For ging automates the process of making a volume (multi) sweepable: Sweep V rification takes the given source and target sur faces, and automatically classifies curve and vertex types so that sweep lay ers are well formed and progress from sources to ta rge s.",
"title": ""
},
{
"docid": "3c5f3cceeb3fee5e37759b873851ddb6",
"text": "The emergence of GUI is a great progress in the history of computer science and software design. GUI makes human computer interaction more simple and interesting. Python, as a popular programming language in recent years, has not been realized in GUI design. Tkinter has the advantage of native support for Python, but there are too few visual GUI generators supporting Tkinter. This article presents a GUI generator based on Tkinter framework, PyDraw. The design principle of PyDraw and the powerful design concept behind it are introduced in detail. With PyDraw's GUI design philosophy, it can easily design a visual GUI rendering generator for any GUI framework with canvas functionality or programming language with screen display control. This article is committed to conveying PyDraw's GUI free design concept. Through experiments, we have proved the practicability and efficiency of PyDrawd. In order to better convey the design concept of PyDraw, let more enthusiasts join PyDraw update and evolution, we have the source code of PyDraw. At the end of the article, we summarize our experience and express our vision for future GUI design. We believe that the future GUI will play an important role in graphical software programming, the future of less code or even no code programming software design methods must become a focus and hot, free, like drawing GUI will be worth pursuing.",
"title": ""
},
{
"docid": "565b07fee5a5812d04818fa132c0da4c",
"text": "PHP is the most popular scripting language for web applications. Because no native solution to compile or protect PHP scripts exists, PHP applications are usually shipped as plain source code which is easily understood or copied by an adversary. In order to prevent such attacks, commercial products such as ionCube, Zend Guard, and Source Guardian promise a source code protection. In this paper, we analyze the inner working and security of these tools and propose a method to recover the source code by leveraging static and dynamic analysis techniques. We introduce a generic approach for decompilation of obfuscated bytecode and show that it is possible to automatically recover the original source code of protected software. As a result, we discovered previously unknown vulnerabilities and backdoors in 1 million lines of recovered source code of 10 protected applications.",
"title": ""
},
{
"docid": "fbc148e6c44e7315d55f2f5b9a2a2190",
"text": "India contributes about 70% of malaria in the South East Asian Region of WHO. Although annually India reports about two million cases and 1000 deaths attributable to malaria, there is an increasing trend in the proportion of Plasmodium falciparum as the agent. There exists heterogeneity and variability in the risk of malaria transmission between and within the states of the country as many ecotypes/paradigms of malaria have been recognized. The pattern of clinical presentation of severe malaria has also changed and while multi-organ failure is more frequently observed in falciparum malaria, there are reports of vivax malaria presenting with severe manifestations. The high burden populations are ethnic tribes living in the forested pockets of the states like Orissa, Jharkhand, Madhya Pradesh, Chhattisgarh and the North Eastern states which contribute bulk of morbidity and mortality due to malaria in the country. Drug resistance, insecticide resistance, lack of knowledge of actual disease burden along with new paradigms of malaria pose a challenge for malaria control in the country. Considering the existing gaps in reported and estimated morbidity and mortality, need for estimation of true burden of malaria has been stressed. Administrative, financial, technical and operational challenges faced by the national programme have been elucidated. Approaches and priorities that may be helpful in tackling serious issues confronting malaria programme have been outlined.",
"title": ""
},
{
"docid": "5726125455c629340859ef5b214dc18a",
"text": "One of the key challenges in applying reinforcement learning to complex robotic control tasks is the need to gather large amounts of experience in order to find an effective policy for the task at hand. Model-based reinforcement learning can achieve good sample efficiency, but requires the ability to learn a model of the dynamics that is good enough to learn an effective policy. In this work, we develop a model-based reinforcement learning algorithm that combines prior knowledge from previous tasks with online adaptation of the dynamics model. These two ingredients enable highly sample-efficient learning even in regimes where estimating the true dynamics is very difficult, since the online model adaptation allows the method to locally compensate for unmodeled variation in the dynamics. We encode the prior experience into a neural network dynamics model, adapt it online by progressively refitting a local linear model of the dynamics, and use model predictive control to plan under these dynamics. Our experimental results show that this approach can be used to solve a variety of complex robotic manipulation tasks in just a single attempt, using prior data from other manipulation behaviors.",
"title": ""
},
{
"docid": "6e59bd839d6bfe81850033f04013d712",
"text": "A variety of applications employ ensemble learning models, using a collection of decision trees, to quickly and accurately classify an input based on its vector of features. In this paper, we discuss the implementation of such a method, namely Random Forests, as the first machine learning algorithm to be executed on the Automata Processor (AP). The AP is an upcoming reconfigurable co-processor accelerator which supports the execution of numerous automata in parallel against a single input data-flow. Owing to this execution model, our approach is fundamentally di↵erent, translating Random Forest models from existing memory-bound tree-traversal algorithms to pipelined designs that use multiple automata to check all of the required thresholds independently and in parallel. We also describe techniques to handle floatingpoint feature values which are not supported in the native hardware, pipelining of the execution stages, and compression of automata for the fastest execution times. The net result is a solution which when evaluated using two applications, namely handwritten digit recognition and sentiment analysis, produce up to 63 and 93 times speed-up respectively over single-core state-of-the-art CPU-based solutions. We foresee these algorithmic techniques to be useful not only in the acceleration of other applications employing Random Forests, but also in the implementation of other machine learning methods on this novel architecture.",
"title": ""
},
{
"docid": "2210207d9234801710fa2a9c59f83306",
"text": "\"Big Data\" as a term has been among the biggest trends of the last three years, leading to an upsurge of research, as well as industry and government applications. Data is deemed a powerful raw material that can impact multidisciplinary research endeavors as well as government and business performance. The goal of this discussion paper is to share the data analytics opinions and perspectives of the authors relating to the new opportunities and challenges brought forth by the big data movement. The authors bring together diverse perspectives, coming from different geographical locations with different core research expertise and different affiliations and work experiences. The aim of this paper is to evoke discussion rather than to provide a comprehensive survey of big data research.",
"title": ""
},
{
"docid": "8c4540f3724dab3a173e94bdba7b0999",
"text": "The significant growth of the Internet of Things (IoT) is revolutionizing the way people live by transforming everyday Internet-enabled objects into an interconnected ecosystem of digital and personal information accessible anytime and anywhere. As more objects become Internet-enabled, the security and privacy of the personal information generated, processed and stored by IoT devices become complex and challenging to manage. This paper details the current security and privacy challenges presented by the increasing use of the IoT. Furthermore, investigate and analyze the limitations of the existing solutions with regard to addressing security and privacy challenges in IoT and propose a possible solution to address these challenges. The results of this proposed solution could be implemented during the IoT design, building, testing and deployment phases in the real-life environments to minimize the security and privacy challenges associated with IoT.",
"title": ""
}
] |
scidocsrr
|
da024cc8f98be4d87de47654f51578af
|
DeepStack: Expert-Level Artificial Intelligence in No-Limit Poker
|
[
{
"docid": "426a7e1f395213d627cd9fb3b3b561b1",
"text": "In the field of computational game theory, games are often compared in terms of their size. This can be measured in several ways, including the number of unique game states, the number of decision points, and the total number of legal actions over all decision points. These numbers are either known or estimated for a wide range of classic games such as chess and checkers. In the stochastic and imperfect information game of poker, these sizes are easily computed in “limit” games which restrict the players’ available actions, but until now had only been estimated for the more complicated “no-limit” variants. In this paper, we describe a simple algorithm for quickly computing the size of two-player no-limit poker games, provide an implementation of this algorithm, and present for the first time precise counts of the number of game states, information sets, actions and terminal nodes in the no-limit poker games played in the Annual Computer Poker Competition.",
"title": ""
}
] |
[
{
"docid": "08e8629cf29da3532007c5cf5c57d8bb",
"text": "Social networks are growing in number and size, with hundreds of millions of user accounts among them. One added benefit of these networks is that they allow users to encode more information about their relationships than just stating who they know. In this work, we are particularly interested in trust relationships, and how they can be used in designing interfaces. In this paper, we present FilmTrust, a website that uses trust in web-based social networks to create predictive movie recommendations. Using the FilmTrust system as a foundation, we show that these recommendations are more accurate than other techniques when the user’s opinions about a film are divergent from the average. We discuss this technique both as an application of social network analysis, as well as how it suggests other analyses that can be performed to help improve collaborative filtering algorithms of all types.",
"title": ""
},
{
"docid": "991420a2abaf1907ab4f5a1c2dcf823d",
"text": "We are interested in counting the number of instances of object classes in natural, everyday images. Previous counting approaches tackle the problem in restricted domains such as counting pedestrians in surveillance videos. Counts can also be estimated from outputs of other vision tasks like object detection. In this work, we build dedicated models for counting designed to tackle the large variance in counts, appearances, and scales of objects found in natural scenes. Our approach is inspired by the phenomenon of subitizing – the ability of humans to make quick assessments of counts given a perceptual signal, for small count values. Given a natural scene, we employ a divide and conquer strategy while incorporating context across the scene to adapt the subitizing idea to counting. Our approach offers consistent improvements over numerous baseline approaches for counting on the PASCAL VOC 2007 and COCO datasets. Subsequently, we study how counting can be used to improve object detection. We then show a proof of concept application of our counting methods to the task of Visual Question Answering, by studying the how many? questions in the VQA and COCO-QA datasets.",
"title": ""
},
{
"docid": "3354f7fc22837efed183d4c1ce023d3d",
"text": "ETHNOPHARMACOLOGICAL RELEVANCE\nBaphicacanthus cusia root also names \"Nan Ban Lan Gen\" has been traditionally used to prevent and treat influenza A virus infections. Here, we identified a peptide derivative, aurantiamide acetate (compound E17), as an active compound in extracts of B. cusia root. Although studies have shown that aurantiamide acetate possesses antioxidant and anti-inflammatory properties, the effects and mechanism by which it functions as an anti-viral or as an anti-inflammatory during influenza virus infection are poorly defined. Here we investigated the anti-viral activity and possible mechanism of compound E17 against influenza virus infection.\n\n\nMATERIALS AND METHODS\nThe anti-viral activity of compound E17 against Influenza A virus (IAV) was determined using the cytopathic effect (CPE) inhibition assay. Viruses were titrated on Madin-Darby canine kidney (MDCK) cells by plaque assays. Ribonucleoprotein (RNP) luciferase reporter assay was further conducted to investigate the effect of compound E17 on the activity of the viral polymerase complex. HEK293T cells with a stably transfected NF-κB luciferase reporter plasmid were employed to examine the activity of compound E17 on NF-κB activation. Activation of the host signaling pathway induced by IAV infection in the absence or presence of compound E17 was assessed by western blotting. The effect of compound E17 on IAV-induced expression of pro-inflammatory cytokines was measured by real-time quantitative PCR and Luminex assays.\n\n\nRESULTS\nCompound E17 exerted an inhibitory effect on IAV replication in MDCK cells but had no effect on avian IAV and influenza B virus. Treatment with compound E17 resulted in a reduction of RNP activity and virus titers. Compound E17 treatment inhibited the transcriptional activity of NF-κB in a NF-κB luciferase reporter stable HEK293 cell after stimulation with TNF-α. Furthermore, compound E17 blocked the activation of the NF-κB signaling pathway and decreased mRNA expression levels of pro-inflammatory genes in infected cells. Compound E17 also suppressed the production of IL-6, TNF-α, IL-8, IP-10 and RANTES from IAV-infected lung epithelial (A549) cells.\n\n\nCONCLUSIONS\nThese results indicate that compound E17 isolated from B. cusia root has potent anti-viral and anti-inflammatory effects on IAV-infected cells via inhibition of the NF-κB pathway. Therefore, compound E17 could be a potential therapeutic agent for the treatment of influenza.",
"title": ""
},
{
"docid": "02fb30e276b9e49c109ea5618066c183",
"text": "Increasing needs in efficient storage management and better utilization of network bandwidth with less data transfer have led the computing community to consider data compression as a solution. However, compression introduces extra overhead and performance can suffer. The key elements in making the decision to use compression are execution time and compression ratio. Due to negative performance impact, compression is often neglected. General purpose computing on graphic processing units (GPUs) introduces new opportunities where parallelism is available. Our work targets the use of opportunities in GPU based systems by exploiting parallelism in compression algorithms. In this paper we present an implementation of the Lempel-Ziv-Storer-Szymanski (LZSS) loss less data compression algorithm by using NVIDIA GPUs Compute Unified Device Architecture (CUDA) Framework. Our implementation of the LZSS algorithm on GPUs significantly improves the performance of the compression process compared to CPU based implementation without any loss in compression ratio. This can support GPU based clusters in solving application bandwidth problems. Our system outperforms the serial CPU LZSS implementation by up to 18x, the parallel threaded version up to 3x and the BZIP2 program by up to 6x in terms of compression time, showing the promise of CUDA systems in loss less data compression. To give the programmers an easy to use tool, our work also provides an API for in memory compression without the need for reading from and writing to files, in addition to the version involving I/O.",
"title": ""
},
{
"docid": "a1f838270925e4769e15edfb37b281fd",
"text": "Assess extensor carpi ulnaris (ECU) tendon position in the ulnar groove, determine the frequency of tendon “dislocation” with the forearm prone, neutral, and supine, and determine if an association exists between ulnar groove morphology and tendon position in asymptomatic volunteers. Axial proton density-weighted MR was performed through the distal radioulnar joint with the forearm prone, neutral, and supine in 38 asymptomatic wrists. The percentage of the tendon located beyond the ulnar-most border of the ulnar groove was recorded. Ulnar groove depth and length was measured and ECU tendon signal was assessed. 15.8 % of tendons remained within the groove in all forearm positions. In 76.3 %, the tendon translated medially from prone to supine. The tendon “dislocated” in 0, 10.5, and 39.5 % with the forearm prone, neutral and supine, respectively. In 7.9 % prone, 5.3 % neutral, and 10.5 % supine exams, the tendon was 51–99 % beyond the ulnar border of the ulnar groove. Mean ulnar groove depth and length were 1.6 and 7.7 mm, respectively, with an overall trend towards greater degrees of tendon translation in shorter, shallower ulnar grooves. The ECU tendon shifts in a medial direction when the forearm is supine; however, tendon “dislocation” has not been previously documented in asymptomatic volunteers. The ECU tendon medially translated or frankly dislocated from the ulnar groove in the majority of our asymptomatic volunteers, particularly when the forearm is supine. Overall greater degrees of tendon translation were observed in shorter and shallower ulnar grooves.",
"title": ""
},
{
"docid": "c8dbc63f90982e05517bbdb98ebaeeb5",
"text": "Even though considerable attention has been given to the polarity of words (positive and negative) and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper we show how the combined strength and wisdom of the crowds can be used to generate a large, high-quality, word–emotion and word–polarity association lexicon quickly and inexpensively. We enumerate the challenges in emotion annotation in a crowdsourcing scenario and propose solutions to address them. Most notably, in addition to questions about emotions associated with terms, we show how the inclusion of a word choice question can discourage malicious data entry, help identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help obtain annotations at sense level (rather than at word level). We conducted experiments on how to formulate the emotionannotation questions, and show that asking if a term is associated with an emotion leads to markedly higher inter-annotator agreement than that obtained by asking if a term evokes an emotion.",
"title": ""
},
{
"docid": "8883e758297e13a1b3cc3cf2dfc1f6c4",
"text": "Melanoma mortality rates are the highest amongst skin cancer patients. Melanoma is life threating when it grows beyond the dermis of the skin. Hence, depth is an important factor to diagnose melanoma. This paper introduces a non-invasive computerized dermoscopy system that considers the estimated depth of skin lesions for diagnosis. A 3-D skin lesion reconstruction technique using the estimated depth obtained from regular dermoscopic images is presented. On basis of the 3-D reconstruction, depth and 3-D shape features are extracted. In addition to 3-D features, regular color, texture, and 2-D shape features are also extracted. Feature extraction is critical to achieve accurate results. Apart from melanoma, in-situ melanoma the proposed system is designed to diagnose basal cell carcinoma, blue nevus, dermatofibroma, haemangioma, seborrhoeic keratosis, and normal mole lesions. For experimental evaluations, the PH2, ISIC: Melanoma Project, and ATLAS dermoscopy data sets is considered. Different feature set combinations is considered and performance is evaluated. Significant performance improvement is reported the post inclusion of estimated depth and 3-D features. The good classification scores of sensitivity = 96%, specificity = 97% on PH2 data set and sensitivity = 98%, specificity = 99% on the ATLAS data set is achieved. Experiments conducted to estimate tumor depth from 3-D lesion reconstruction is presented. Experimental results achieved prove that the proposed computerized dermoscopy system is efficient and can be used to diagnose varied skin lesion dermoscopy images.",
"title": ""
},
{
"docid": "b0709248d08564b7d1a1f23243aa0946",
"text": "TrustZone-based Real-time Kernel Protection (TZ-RKP) is a novel system that provides real-time protection of the OS kernel using the ARM TrustZone secure world. TZ-RKP is more secure than current approaches that use hypervisors to host kernel protection tools. Although hypervisors provide privilege and isolation, they face fundamental security challenges due to their growing complexity and code size. TZ-RKP puts its security monitor, which represents its entire Trusted Computing Base (TCB), in the TrustZone secure world; a safe isolated environment that is dedicated to security services. Hence, the security monitor is safe from attacks that can potentially compromise the kernel, which runs in the normal world. Using the secure world for kernel protection has been crippled by the lack of control over targets that run in the normal world. TZ-RKP solves this prominent challenge using novel techniques that deprive the normal world from the ability to control certain privileged system functions. These functions are forced to route through the secure world for inspection and approval before being executed. TZ-RKP's control of the normal world is non-bypassable. It can effectively stop attacks that aim at modifying or injecting kernel binaries. It can also stop attacks that involve modifying the system memory layout, e.g, through memory double mapping. This paper presents the implementation and evaluation of TZ-RKP, which has gone through rigorous and thorough evaluation of effectiveness and performance. It is currently deployed on the latest models of the Samsung Galaxy series smart phones and tablets, which clearly demonstrates that it is a practical real-world system.",
"title": ""
},
{
"docid": "005b2190e587b7174e844d7b88080ea0",
"text": "This paper presents the impact of automatic feature extraction used in a deep learning architecture such as Convolutional Neural Network (CNN). Recently CNN has become a very popular tool for image classification which can automatically extract features, learn and classify them. It is a common belief that CNN can always perform better than other well-known classifiers. However, there is no systematic study which shows that automatic feature extraction in CNN is any better than other simple feature extraction techniques, and there is no study which shows that other simple neural network architectures cannot achieve same accuracy as CNN. In this paper, a systematic study to investigate CNN's feature extraction is presented. CNN with automatic feature extraction is firstly evaluated on a number of benchmark datasets and then a simple traditional Multi-Layer Perceptron (MLP) with full image, and manual feature extraction are evaluated on the same benchmark datasets. The purpose is to see whether feature extraction in CNN performs any better than a simple feature with MLP and full image with MLP. Many experiments were systematically conducted by varying number of epochs and hidden neurons. The experimental results revealed that traditional MLP with suitable parameters can perform as good as CNN or better in certain cases.",
"title": ""
},
{
"docid": "a5bfeab5278eb5bbe45faac0535f0b81",
"text": "In modern computer systems, system event logs have always been the primary source for checking system status. As computer systems become more and more complex, the interaction between software and hardware increases frequently. The components will generate enormous log information, including running reports and fault information. The sheer quantity of data is a great challenge for analysis relying on the manual method. In this paper, we implement a management and analysis system of log information, which can assist system administrators to understand the real-time status of the entire system, classify logs into different fault types, and determine the root cause of the faults. In addition, we improve the existing fault correlation analysis method based on the results of system log classification. We apply the system in a cloud computing environment for evaluation. The results show that our system can classify fault logs automatically and effectively. With the proposed system, administrators can easily detect the root cause of faults.",
"title": ""
},
{
"docid": "d7ec815dd17e8366b5238022339a0a14",
"text": "V marketing is a form of peer-to-peer communication in which individuals are encouraged to pass on promotional messages within their social networks. Conventional wisdom holds that the viral marketing process is both random and unmanageable. In this paper, we deconstruct the process and investigate the formation of the activated digital network as distinct from the underlying social network. We then consider the impact of the social structure of digital networks (random, scale free, and small world) and of the transmission behavior of individuals on campaign performance. Specifically, we identify alternative social network models to understand the mediating effects of the social structures of these models on viral marketing campaigns. Next, we analyse an actual viral marketing campaign and use the empirical data to develop and validate a computer simulation model for viral marketing. Finally, we conduct a number of simulation experiments to predict the spread of a viral message within different types of social network structures under different assumptions and scenarios. Our findings confirm that the social structure of digital networks play a critical role in the spread of a viral message. Managers seeking to optimize campaign performance should give consideration to these findings before designing and implementing viral marketing campaigns. We also demonstrate how a simulation model is used to quantify the impact of campaign management inputs and how these learnings can support managerial decision making.",
"title": ""
},
{
"docid": "90709f620b27196fdc7fc380e3757518",
"text": "The bag-of-visual-words (BoVW) method with construction of a single dictionary of visual words has been used previously for a variety of classification tasks in medical imaging, including the diagnosis of liver lesions. In this paper, we describe a novel method for automated diagnosis of liver lesions in portal-phase computed tomography (CT) images that improves over single-dictionary BoVW methods by using an image patch representation of the interior and boundary regions of the lesions. Our approach captures characteristics of the lesion margin and of the lesion interior by creating two separate dictionaries for the margin and the interior regions of lesions (“dual dictionaries” of visual words). Based on these dictionaries, visual word histograms are generated for each region of interest within the lesion and its margin. For validation of our approach, we used two datasets from two different institutions, containing CT images of 194 liver lesions (61 cysts, 80 metastasis, and 53 hemangiomas). The final diagnosis of each lesion was established by radiologists. The classification accuracy for the images from the two institutions was 99% and 88%, respectively, and 93% for a combined dataset. Our new BoVW approach that uses dual dictionaries shows promising results. We believe the benefits of our approach may generalize to other application domains within radiology.",
"title": ""
},
{
"docid": "f961d007102f3c56a94772eff0b6961a",
"text": "Syllogisms are arguments about the properties of entities. They consist of 2 premises and a conclusion, which can each be in 1 of 4 \"moods\": All A are B, Some A are B, No A are B, and Some A are not B. Their logical analysis began with Aristotle, and their psychological investigation began over 100 years ago. This article outlines the logic of inferences about syllogisms, which includes the evaluation of the consistency of sets of assertions. It also describes the main phenomena of reasoning about properties. There are 12 extant theories of such inferences, and the article outlines each of them and describes their strengths and weaknesses. The theories are of 3 main sorts: heuristic theories that capture principles that could underlie intuitive responses, theories of deliberative reasoning based on formal rules of inference akin to those of logic, and theories of deliberative reasoning based on set-theoretic diagrams or models. The article presents a meta-analysis of these extant theories of syllogisms using data from 6 studies. None of the 12 theories provides an adequate account, and so the article concludes with a guide-based on its qualitative and quantitative analyses-of how best to make progress toward a satisfactory theory.",
"title": ""
},
{
"docid": "88d85a7b471b7d8229221cf1186b389d",
"text": "The Internet of Things (IoT) is a vision which real-world objects are part of the internet. Every object is uniquely identified, and accessible to the network. There are various types of communication protocol for connect the device to the Internet. One of them is a Low Power Wide Area Network (LPWAN) which is a novel technology use to implement IoT applications. There are many platforms of LPWAN such as NB-IoT, LoRaWAN. In this paper, the experimental performance evaluation of LoRaWAN over a real environment in Bangkok, Thailand is presented. From these experimental results, the communication ranges in both an outdoor and an indoor environment are limited. Hence, the IoT application with LoRaWAN technology can be reliable in limited of communication ranges.",
"title": ""
},
{
"docid": "4c811ed0f6c69ca5485f6be7d950df89",
"text": "Fairness has emerged as an important category of analysis for machine learning systems in some application areas. In extending the concept of fairness to recommender systems, there is an essential tension between the goals of fairness and those of personalization. However, there are contexts in which equity across recommendation outcomes is a desirable goal. It is also the case that in some applications fairness may be a multisided concept, in which the impacts on multiple groups of individuals must be considered. In this paper, we examine two different cases of fairness-aware recommender systems: consumer-centered and provider-centered. We explore the concept of a balanced neighborhood as a mechanism to preserve personalization in recommendation while enhancing the fairness of recommendation outcomes. We show that a modified version of the Sparse Linear Method (SLIM) can be used to improve the balance of user and item neighborhoods, with the result of achieving greater outcome fairness in real-world datasets with minimal loss in ranking performance.",
"title": ""
},
{
"docid": "54380a4e0ab433be24d100db52e6bb55",
"text": "Why do some new technologies emerge and quickly supplant incumbent technologies while others take years or decades to take off? We explore this question by presenting a framework that considers both the focal competing technologies as well as the ecosystems in which they are embedded. Within our framework, each episode of technology transition is characterized by the ecosystem emergence challenge that confronts the new technology and the ecosystem extension opportunity that is available to the old technology. We identify four qualitatively distinct regimes with clear predictions for the pace of substitution. Evidence from 10 episodes of technology transitions in the semiconductor lithography equipment industry from 1972 to 2009 offers strong support for our framework. We discuss the implication of our approach for firm strategy. Disciplines Management Sciences and Quantitative Methods This journal article is available at ScholarlyCommons: https://repository.upenn.edu/mgmt_papers/179 Innovation Ecosystems and the Pace of Substitution: Re-examining Technology S-curves Ron Adner Tuck School of Business, Dartmouth College Strategy and Management 100 Tuck Hall Hanover, NH 03755, USA Tel: 1 603 646 9185 Email:\t\r [email protected] Rahul Kapoor The Wharton School University of Pennsylvania Philadelphia, PA-19104 Tel : 1 215 898 6458 Email: [email protected]",
"title": ""
},
{
"docid": "1c117c63455c2b674798af0e25e3947c",
"text": "We are studying the manufacturing performance of semiconductor wafer fabrication plants in the US, Asia, and Europe. There are great similarities in production equipment, manufacturing processes, and products produced at semiconductor fabs around the world. However, detailed comparisons over multi-year intervals show that important quantitative indicators of productivity, including defect density (yield), major equipment production rates, wafer throughput time, and effective new process introduction to manufacturing, vary by factors of 3 to as much as 5 across an international sample of 28 fabs. We conduct on-site observations, and interviews with manufacturing personnel at all levels from operator to general manager, to better understand reasons for the observed wide variations in performance. We have identified important factors in the areas of information systems, organizational practices, process and technology improvements, and production control that correlate strongly with high productivity. Optimum manufacturing strategy is different for commodity products, high-value proprietary products, and foundry business.",
"title": ""
},
{
"docid": "10cc52c08da8118a220e436bc37e8beb",
"text": "The most common approach in text mining classification tasks is to rely on features like words, part-of-speech tags, stems, or some other high-level linguistic features. Unlike the common approach, we present a method that uses only character p-grams (also known as n-grams) as features for the Arabic Dialect Identification (ADI) Closed Shared Task of the DSL 2016 Challenge. The proposed approach combines several string kernels using multiple kernel learning. In the learning stage, we try both Kernel Discriminant Analysis (KDA) and Kernel Ridge Regression (KRR), and we choose KDA as it gives better results in a 10-fold cross-validation carried out on the training set. Our approach is shallow and simple, but the empirical results obtained in the ADI Shared Task prove that it achieves very good results. Indeed, we ranked on the second place with an accuracy of 50.91% and a weighted F1 score of 51.31%. We also present improved results in this paper, which we obtained after the competition ended. Simply by adding more regularization into our model to make it more suitable for test data that comes from a different distribution than training data, we obtain an accuracy of 51.82% and a weighted F1 score of 52.18%. Furthermore, the proposed approach has an important advantage in that it is language independent and linguistic theory neutral, as it does not require any NLP tools.",
"title": ""
},
{
"docid": "8891a6c47a7446bb7597471796900867",
"text": "The component \"thing\" of the Internet of Things does not yet exist in current business process modeling standards. The \"thing\" is the essential and central concept of the Internet of Things, and without its consideration we will not be able to model the business processes of the future, which will be able to measure or change states of objects in our real-world environment. The presented approach focuses on integrating the concept of the Internet of Things into the meta-model of the process modeling standard BPMN 2.0 as standard-conform as possible. By a terminological and conceptual delimitation, three components of the standard are examined and compared towards a possible expansion. By implementing the most appropriate solution, the new thing concept becomes usable for modelers, both as a graphical and machine-readable element.",
"title": ""
},
{
"docid": "66133239610bb08d83fb37f2c11a8dc5",
"text": "sists of two excitation laser beams. One beam scans the volume of the brain from the side of a horizontally positioned zebrafish but is rapidly switched off when inside an elliptical exclusion region located over the eye (Fig. 1b). Simultaneously, a second beam scans from the front, to cover the forebrain and the regions between the eyes. Together, these two beams achieve nearly complete coverage of the brain without exposing the retina to direct laser excitation, which allows unimpeded presentation of visual stimuli that are projected onto a screen below the fish. To monitor intended swimming behavior, we used existing methods for recording activity from motor neuron axons in the tail of paralyzed larval zebrafish1 (Fig. 1a and Supplementary Note). This system provides imaging speeds of up to three brain volumes per second (40 planes per brain volume); increases in camera speed will allow for faster volumetric sampling. Because light-sheet imaging may still introduce some additional sensory stimulation (excitation light scattering in the brain and reflected from the glass walls of the chamber), we assessed whether fictive behavior in 5–7 d post-fertilization (d.p.f.) fish was robust to the presence of the light sheets. We tested two visuoLight-sheet functional imaging in fictively behaving zebrafish",
"title": ""
}
] |
scidocsrr
|
02041c4237e3afe2f0040c7beb453c57
|
Training a text classifier with a single word using Twitter Lists and domain adaptation
|
[
{
"docid": "8d0c5de2054b7c6b4ef97a211febf1d0",
"text": "This paper revisits the problem of optimal learning and decision-making when different misclassification errors incur different penalties. We characterize precisely but intuitively when a cost matrix is reasonable, and we show how to avoid the mistake of defining a cost matrix that is economically incoherent. For the two-class case, we prove a theorem that shows how to change the proportion of negative examples in a training set in order to make optimal cost-sensitive classification decisions using a classifier learned by a standard non-costsensitive learning method. However, we then argue that changing the balance of negative and positive training examples has little effect on the classifiers produced by standard Bayesian and decision tree learning methods. Accordingly, the recommended way of applying one of these methods in a domain with differing misclassification costs is to learn a classifier from the training set as given, and then to compute optimal decisions explicitly using the probability estimates given by the classifier. 1 Making decisions based on a cost matrix Given a specification of costs for correct and incorrect predictions, an example should be predicted to have the class that leads to the lowest expected cost, where the expectatio n is computed using the conditional probability of each class given the example. Mathematically, let the (i; j) entry in a cost matrixC be the cost of predicting class i when the true class isj. If i = j then the prediction is correct, while if i 6= j the prediction is incorrect. The optimal prediction for an examplex is the classi that minimizes L(x; i) =Xj P (jjx)C(i; j): (1) Costs are not necessarily monetary. A cost can also be a waste of time, or the severity of an illness, for example. For eachi,L(x; i) is a sum over the alternative possibilities for the true class of x. In this framework, the role of a learning algorithm is to produce a classifier that for any example x can estimate the probability P (jjx) of each classj being the true class ofx. For an examplex, making the predictioni means acting as if is the true class of x. The essence of cost-sensitive decision-making is that it can be optimal to ct as if one class is true even when some other class is more probable. For example, it can be rational not to approve a large credit card transaction even if the transaction is mos t likely legitimate. 1.1 Cost matrix properties A cost matrixC always has the following structure when there are only two classes: actual negative actual positive predict negative C(0; 0) = 00 C(0; 1) = 01 predict positive C(1; 0) = 10 C(1; 1) = 11 Recent papers have followed the convention that cost matrix rows correspond to alternative predicted classes, whi le columns correspond to actual classes, i.e. row/column = i/j predicted/actual. In our notation, the cost of a false positive is 10 while the cost of a false negative is 01. Conceptually, the cost of labeling an example incorrectly should always be greater th an the cost of labeling it correctly. Mathematically, it shoul d always be the case that 10 > 00 and 01 > 11. We call these conditions the “reasonableness” conditions. Suppose that the first reasonableness condition is violated , so 00 10 but still 01 > 11. In this case the optimal policy is to label all examples positive. Similarly, if 10 > 00 but 11 01 then it is optimal to label all examples negative. We leave the case where both reasonableness conditions are violated for the reader to analyze. 
Margineantu [2000] has pointed out that for some cost matrices, some class labels are never predicted by the optimal policy as given by Equation (1). We can state a simple, intuitive criterion for when this happens. Say that row m dominates row n in a cost matrix C if for all j, C(m, j) ≥ C(n, j). In this case the cost of predicting n is no greater than the cost of predicting m, regardless of what the true class j is. So it is optimal never to predict m. As a special case, the optimal prediction is always n if row n is dominated by all other rows in a cost matrix. The two reasonableness conditions for a two-class cost matrix imply that neither row in the matrix dominates the other. Given a cost matrix, the decisions that are optimal are unchanged if each entry in the matrix is multiplied by a positive constant. This scaling corresponds to changing the unit of account for costs. Similarly, the decisions that are optimal are unchanged if a constant is added to each entry in the matrix. This shifting corresponds to changing the baseline away from which costs are measured. By scaling and shifting entries, any two-class cost matrix that satisfies the reasonableness conditions can be transformed into a simpler matrix that always leads to the same decisions:",
"title": ""
},
{
"docid": "f7d535f9a5eeae77defe41318d642403",
"text": "On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift.",
"title": ""
},
{
"docid": "853ef57bfa4af5edf4ee3c8a46e4b4f4",
"text": "Hidden properties of social media users, such as their ethnicity, gender, and location, are often reflected in their observed attributes, such as their first and last names. Furthermore, users who communicate with each other often have similar hidden properties. We propose an algorithm that exploits these insights to cluster the observed attributes of hundreds of millions of Twitter users. Attributes such as user names are grouped together if users with those names communicate with other similar users. We separately cluster millions of unique first names, last names, and userprovided locations. The efficacy of these clusters is then evaluated on a diverse set of classification tasks that predict hidden users properties such as ethnicity, geographic location, gender, language, and race, using only profile names and locations when appropriate. Our readily-replicable approach and publiclyreleased clusters are shown to be remarkably effective and versatile, substantially outperforming state-of-the-art approaches and human accuracy on each of the tasks studied.",
"title": ""
}
] |
[
{
"docid": "3531efcf8308541b0187b2ea4ab91721",
"text": "This paper proposed a novel controlling technique of pulse width modulation (PWM) mode and pulse frequency modulation (PFM) mode to keep the high efficiency within width range of loading. The novel control method is using PWM and PFM detector to achieve two modes switching appropriately. The controlling technique can make the efficiency of current mode DC-DC buck converter up to 88% at light loading and this paper is implemented by TSMC 0.35 mum CMOS process.",
"title": ""
},
{
"docid": "9747be055df9acedfdfe817eb7e1e06e",
"text": "Text summarization solves the problem of extracting important information from huge amount of text data. There are various methods in the literature that aim to find out well-formed summaries. One of the most commonly used methods is the Latent Semantic Analysis (LSA). In this paper, different LSA based summarization algorithms are explained and two new LSA based summarization algorithms are proposed. The algorithms are evaluated on Turkish documents, and their performances are compared using their ROUGE-L scores. One of our algorithms produces the best scores.",
"title": ""
},
{
"docid": "70dd5d85ec3864b8c6cbd6b300f4640f",
"text": "An F-RAN is presented in this article as a promising paradigm for the fifth generation wireless communication system to provide high spectral and energy efficiency. The core idea is to take full advantage of local radio signal processing, cooperative radio resource management, and distributed storing capabilities in edge devices, which can decrease the heavy burden on fronthaul and avoid large-scale radio signal processing in the centralized baseband unit pool. This article comprehensively presents the system architecture and key techniques of F-RANs. In particular, key techniques and their corresponding solutions, including transmission mode selection and interference suppression, are discussed. Open issues in terms of edge caching, software-defined networking, and network function virtualization are also identified.",
"title": ""
},
{
"docid": "4deea3312fe396f81919b07462551682",
"text": "The purpose of this paper is to explore applications of blockchain technology related to the 4th Industrial Revolution (Industry 4.0) and to present an example where blockchain is employed to facilitate machine-to-machine (M2M) interactions and establish a M2M electricity market in the context of the chemical industry. The presented scenario includes two electricity producers and one electricity consumer trading with each other over a blockchain. The producers publish exchange offers of energy (in kWh) for currency (in USD) in a data stream. The consumer reads the offers, analyses them and attempts to satisfy its energy demand at a minimum cost. When an offer is accepted it is executed as an atomic exchange (multiple simultaneous transactions). Additionally, this paper describes and discusses the research and application landscape of blockchain technology in relation to the Industry 4.0. It concludes that this technology has significant under-researched potential to support and enhance the efficiency gains of the revolution and identifies areas for future research. Producer 2 • Issue energy • Post purchase offers (as atomic transactions) Consumer • Look through the posted offers • Choose cheapest and satisfy its own demand Blockchain Stream Published offers are visible here Offer sent",
"title": ""
},
{
"docid": "f6ae71fee81a8560f37cb0dccfd1e3cd",
"text": "Linguistic research to date has determined many of the principles that govern the structure of the spatial schemas represented by closed-class forms across the world’s languages. contributing to this cumulative understanding have, for example, been Gruber 1965, Fillmore 1968, Leech 1969, Clark 1973, Bennett 1975, Herskovits 1982, Jackendoff 1983, Zubin and Svorou 1984, as well as myself, Talmy 1983, 2000a, 2000b). It is now feasible to integrate these principles and to determine the comprehensive system they belong to for spatial structuring in spoken language. The finding here is that this system has three main parts: the componential, the compositional, and the augmentive.",
"title": ""
},
{
"docid": "db3bb02dde6c818b173cf12c9c7440b7",
"text": "PURPOSE\nThe authors conducted a systematic review of the published literature on social media use in medical education to answer two questions: (1) How have interventions using social media tools affected outcomes of satisfaction, knowledge, attitudes, and skills for physicians and physicians-in-training? and (2) What challenges and opportunities specific to social media have educators encountered in implementing these interventions?\n\n\nMETHOD\nThe authors searched the MEDLINE, CINAHL, ERIC, Embase, PsycINFO, ProQuest, Cochrane Library, Web of Science, and Scopus databases (from the start of each through September 12, 2011) using keywords related to social media and medical education. Two authors independently reviewed the search results to select peer-reviewed, English-language articles discussing social media use in educational interventions at any level of physician training. They assessed study quality using the Medical Education Research Study Quality Instrument.\n\n\nRESULTS\nFourteen studies met inclusion criteria. Interventions using social media tools were associated with improved knowledge (e.g., exam scores), attitudes (e.g., empathy), and skills (e.g., reflective writing). The most commonly reported opportunities related to incorporating social media tools were promoting learner engagement (71% of studies), feedback (57%), and collaboration and professional development (both 36%). The most commonly cited challenges were technical issues (43%), variable learner participation (43%), and privacy/security concerns (29%). Studies were generally of low to moderate quality; there was only one randomized controlled trial.\n\n\nCONCLUSIONS\nSocial media use in medical education is an emerging field of scholarship that merits further investigation. Educators face challenges in adapting new technologies, but they also have opportunities for innovation.",
"title": ""
},
{
"docid": "1e3e1e272f1b8da8f28cc9d3a338bfc6",
"text": "In an attempt to preserve the structural information in malware binaries during feature extraction, function call graph-based features have been used in various research works in malware classification. However, the approach usually employed when performing classification on these graphs, is based on computing graph similarity using computationally intensive techniques. Due to this, much of the previous work in this area incurred large performance overhead and does not scale well. In this paper, we propose a linear time function call graph (FCG) vector representation based on function clustering that has significant performance gains in addition to improved classification accuracy. We also show how this representation can enable using graph features together with other non-graph features.",
"title": ""
},
{
"docid": "744162ac558f212f73327ba435c1d578",
"text": "Massive classification, a classification task defined over a vast number of classes (hundreds of thousands or even millions), has become an essential part of many real-world systems, such as face recognition. Existing methods, including the deep networks that achieved remarkable success in recent years, were mostly devised for problems with a moderate number of classes. They would meet with substantial difficulties, e.g. excessive memory demand and computational cost, when applied to massive problems. We present a new method to tackle this problem. This method can efficiently and accurately identify a small number of “active classes” for each mini-batch, based on a set of dynamic class hierarchies constructed on the fly. We also develop an adaptive allocation scheme thereon, which leads to a better tradeoff between performance and cost. On several large-scale benchmarks, our method significantly reduces the training cost and memory demand, while maintaining competitive performance.",
"title": ""
},
{
"docid": "80058ed2de002f05c9f4c1451c53e69c",
"text": "Purchasing decisions in many product categories are heavily influenced by the shopper's aesthetic preferences. It's insufficient to simply match a shopper with popular items from the category in question; a successful shopping experience also identifies products that match those aesthetics. The challenge of capturing shoppers' styles becomes more difficult as the size and diversity of the marketplace increases. At Etsy, an online marketplace for handmade and vintage goods with over 30 million diverse listings, the problem of capturing taste is particularly important -- users come to the site specifically to find items that match their eclectic styles.\n In this paper, we describe our methods and experiments for deploying two new style-based recommender systems on the Etsy site. We use Latent Dirichlet Allocation (LDA) to discover trending categories and styles on Etsy, which are then used to describe a user's \"interest\" profile. We also explore hashing methods to perform fast nearest neighbor search on a map-reduce framework, in order to efficiently obtain recommendations. These techniques have been implemented successfully at very large scale, substantially improving many key business metrics.",
"title": ""
},
{
"docid": "b36058bcfcb5f5f4084fe131c42b13d9",
"text": "We present regular linear temporal logic (RLTL), a logic that generalizes linear temporal logic with the ability to use regular expressions arbitrarily as sub-expressions. Every LTL operator can be defined as a context in regular linear temporal logic. This implies that there is a (linear) translation from LTL to RLTL. Unlike LTL, regular linear temporal logic can define all ω-regular languages, while still keeping the satisfiability problem in PSPACE. Unlike the extended temporal logics ETL∗, RLTL is defined with an algebraic signature. In contrast to the linear time μ-calculus, RLTL does not depend on fix-points in its syntax.",
"title": ""
},
{
"docid": "f690ffa886d3ed44a6b2171226cd151f",
"text": "Gold nanoparticles are widely used in biomedical imaging and diagnostic tests. Based on their established use in the laboratory and the chemical stability of Au(0), gold nanoparticles were expected to be safe. The recent literature, however, contains conflicting data regarding the cytotoxicity of gold nanoparticles. Against this background a systematic study of water-soluble gold nanoparticles stabilized by triphenylphosphine derivatives ranging in size from 0.8 to 15 nm is made. The cytotoxicity of these particles in four cell lines representing major functional cell types with barrier and phagocyte function are tested. Connective tissue fibroblasts, epithelial cells, macrophages, and melanoma cells prove most sensitive to gold particles 1.4 nm in size, which results in IC(50) values ranging from 30 to 56 microM depending on the particular 1.4-nm Au compound-cell line combination. In contrast, gold particles 15 nm in size and Tauredon (gold thiomalate) are nontoxic at up to 60-fold and 100-fold higher concentrations, respectively. The cellular response is size dependent, in that 1.4-nm particles cause predominantly rapid cell death by necrosis within 12 h while closely related particles 1.2 nm in diameter effect predominantly programmed cell death by apoptosis.",
"title": ""
},
{
"docid": "53d4995e474fb897fb6f10cef54c894f",
"text": "The measurement of different target parameters using radar systems has been an active research area for the last decades. Particularly target angle measurement is a very demanding topic, because obtaining good measurement results often goes hand in hand with extensive hardware effort. Especially for sensors used in the mass market, e.g. in automotive applications like adaptive cruise control this may be prohibitive. Therefore we address target localization using a compact frequency-modulated continuous-wave (FMCW) radar sensor. The angular measurement results are improved compared to standard beamforming methods using an adaptive beamforming approach. This approach will be applied to the FMCW principle in a way that allows the use of well known methods for the determination of other target parameters like range or velocity. The applicability of the developed theory will be shown on different measurement scenarios using a 24-GHz prototype radar system.",
"title": ""
},
{
"docid": "12dd3908cd443785d8e28c1f6d3d720e",
"text": "Deep neural networks have been widely used in numerous computer vision applications, particularly in face recognition. However, deploying deep neural network face recognition on mobile devices is still limited since most highaccuracy deep models are both time and GPU consumption in the inference stage. Therefore, developing a lightweight deep neural network is one of the most promising solutions to deploy face recognition on mobile devices. Such the lightweight deep neural network requires efficient memory with small number of weights representation and low cost operators. In this paper a novel deep neural network named MobiFace, which is simple but effective, is proposed for productively deploying face recognition on mobile devices. The experimental results have shown that our lightweight MobiFace is able to achieve high performance with 99.7% on LFW database and 91.3% on large-scale challenging Megaface database. It is also eventually competitive against large-scale deep-networks face recognition while significant reducing computational time and memory consumption.",
"title": ""
},
{
"docid": "ec641ace6df07156891f2bf40ea5d072",
"text": "This paper addresses deep face recognition (FR) problem under open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of angular margin can be quantitatively adjusted by a parameter m. We further derive specific m to approximate the ideal feature criterion. Extensive analysis and experiments on Labeled Face in the Wild (LFW), Youtube Faces (YTF) and MegaFace Challenge 1 show the superiority of A-Softmax loss in FR tasks.",
"title": ""
},
{
"docid": "ebd40aaf7fa87beec30ceba483cc5047",
"text": "Event Detection (ED) aims to identify instances of specified types of events in text, which is a crucial component in the overall task of event extraction. The commonly used features consist of lexical, syntactic, and entity information, but the knowledge encoded in the Abstract Meaning Representation (AMR) has not been utilized in this task. AMR is a semantic formalism in which the meaning of a sentence is encoded as a rooted, directed, acyclic graph. In this paper, we demonstrate the effectiveness of AMR to capture and represent the deeper semantic contexts of the trigger words in this task. Experimental results further show that adding AMR features on top of the traditional features can achieve 67.8% (with 2.1% absolute improvement) F-measure (F1), which is comparable to the state-of-the-art approaches.",
"title": ""
},
{
"docid": "bf62cf6deb1b11816fa271bfecde1077",
"text": "EASL–EORTC Clinical Practice Guidelines (CPG) on the management of hepatocellular carcinoma (HCC) define the use of surveillance, diagnosis, and therapeutic strategies recommended for patients with this type of cancer. This is the first European joint effort by the European Association for the Study of the Liver (EASL) and the European Organization for Research and Treatment of Cancer (EORTC) to provide common guidelines for the management of hepatocellular carcinoma. These guidelines update the recommendations reported by the EASL panel of experts in HCC published in 2001 [1]. Several clinical and scientific advances have occurred during the past decade and, thus, a modern version of the document is urgently needed. The purpose of this document is to assist physicians, patients, health-care providers, and health-policy makers from Europe and worldwide in the decision-making process according to evidencebased data. Users of these guidelines should be aware that the recommendations are intended to guide clinical practice in circumstances where all possible resources and therapies are available. Thus, they should adapt the recommendations to their local regulations and/or team capacities, infrastructure, and cost– benefit strategies. Finally, this document sets out some recommendations that should be instrumental in advancing the research and knowledge of this disease and ultimately contribute to improve patient care. The EASL–EORTC CPG on the management of hepatocellular carcinoma provide recommendations based on the level of evi-",
"title": ""
},
{
"docid": "a8c3c2aad1853e5322d62bf0f688d358",
"text": "Individual decision-making forms the basis for nearly all of microeconomic analysis. These notes outline the standard economic model of rational choice in decisionmaking. In the standard view, rational choice is defined to mean the process of determining what options are available and then choosing the most preferred one according to some consistent criterion. In a certain sense, this rational choice model is already an optimization-based approach. We will find that by adding one empirically unrestrictive assumption, the problem of rational choice can be represented as one of maximizing a real-valued utility function. The utility maximization approach grew out of a remarkable intellectual convergence that began during the 19th century. On one hand, utilitarian philosophers were seeking an objective criterion for a science of government. If policies were to be decided based on attaining the “greatest good for the greatest number,” they would need to find a utility index that could measure of how beneficial different policies were to different people. On the other hand, thinkers following Adam Smith were trying to refine his ideas about how an economic system based on individual self-interest would work. Perhaps that project, too, could be advanced by developing an index of self-interest, assessing how beneficial various outcomes ∗These notes are an evolving, collaborative product. The first version was by Antonio Rangel in Fall 2000. Those original notes were edited and expanded by Jon Levin in Fall 2001 and 2004, and by Paul Milgrom in Fall 2002 and 2003.",
"title": ""
},
{
"docid": "ede8b89c37c10313a84ce0d0d21af8fc",
"text": "The adaptive fuzzy and fuzzy neural models are being widely used for identification of dynamic systems. This paper describes different fuzzy logic and neural fuzzy models. The robustness of models has further been checked by Simulink implementation of the models with application to the problem of system identification. The approach is to identify the system by minimizing the cost function using parameters update.",
"title": ""
},
{
"docid": "adfe1398a35e63b0bfbf2fd55e7a9d81",
"text": "Neutrosophic numbers easily allow modeling uncertainties of prices universe, thus justifying the growing interest for theoretical and practical aspects of arithmetic generated by some special numbers in our work. At the beginning of this paper, we reconsider the importance in applied research of instrumental discernment, viewed as the main support of the final measurement validity. Theoretically, the need for discernment is revealed by decision logic, and more recently by the new neutrosophic logic and by constructing neutrosophic-type index numbers, exemplified in the context and applied to the world of prices, and, from a practical standpoint, by the possibility to use index numbers in characterization of some cyclical phenomena and economic processes, e.g. inflation rate. The neutrosophic index numbers or neutrosophic indexes are the key topic of this article. The next step is an interrogative and applicative one, drawing the coordinates of an optimized discernment centered on neutrosophic-type index numbers. The inevitable conclusions are optimistic in relation to the common future of the index method and neutrosophic logic, with statistical and economic meaning and utility.",
"title": ""
},
{
"docid": "17806963c91f6d6981f1dcebf3880927",
"text": "The ability to assess the reputation of a member in a web community is a need addressed in many different ways according to the many different stages in which the nature of communities has evolved over time. In the case of reputation of goods/services suppliers, the solutions available to prevent the feedback abuse are generally reliable but centralized under the control of few big Internet companies. In this paper we show how a decentralized and distributed feedback management system can be built on top of the Bitcoin blockchain.",
"title": ""
}
] |
scidocsrr
|
93a44a6ab7dd1262366bd2f3081f0595
|
RIOT: One OS to Rule Them All in the IoT
|
[
{
"docid": "5fd6462e402e3a3ab1e390243d80f737",
"text": "We present TinyOS, a flexible, application-specific operating system for sensor networks. Sensor networks consist of (potentially) thousands of tiny, low-power nodes, each of which execute concurrent, reactive programs that must operate with severe memory and power constraints. The sensor network challenges of limited resources, event-centric concurrent applications, and low-power operation drive the design of TinyOS. Our solution combines flexible, fine-grain components with an execution model that supports complex yet safe concurrent operations. TinyOS meets these challenges well and has become the platform of choice for sensor network research; it is in use by over a hundred groups worldwide, and supports a broad range of applications and research topics. We provide a qualitative and quantitative evaluation of the system, showing that it supports complex, concurrent programs with very low memory requirements (many applications fit within 16KB of memory, and the core OS is 400 bytes) and efficient, low-power operation. We present our experiences with TinyOS as a platform for sensor network innovation and applications.",
"title": ""
},
{
"docid": "cf54c485a54d9b22d06710684061eac2",
"text": "Many threads packages have been proposed for programming wireless sensor platforms. However, many sensor network operating systems still choose to provide an event-driven model, due to efficiency concerns. We present TOS-Threads, a threads package for TinyOS that combines the ease of a threaded programming model with the efficiency of an event-based kernel. TOSThreads is backwards compatible with existing TinyOS code, supports an evolvable, thread-safe kernel API, and enables flexible application development through dynamic linking and loading. In TOS-Threads, TinyOS code runs at a higher priority than application threads and all kernel operations are invoked only via message passing, never directly, ensuring thread-safety while enabling maximal concurrency. The TOSThreads package is non-invasive; it does not require any large-scale changes to existing TinyOS code.\n We demonstrate that TOSThreads context switches and system calls introduce an overhead of less than 0.92% and that dynamic linking and loading takes as little as 90 ms for a representative sensing application. We compare different programming models built using TOSThreads, including standard C with blocking system calls and a reimplementation of Tenet. Additionally, we demonstrate that TOSThreads is able to run computationally intensive tasks without adversely affecting the timing of critical OS services.",
"title": ""
}
] |
[
{
"docid": "235e6e4537e9f336bf80e6d648fdc8fb",
"text": "Communication between the deaf and non-deaf has always been a very cumbersome task. This paper aims to cover the various prevailing methods of deaf-mute communication interpreter system. The two broad classification of the communication methodologies used by the deaf –mute people are Wearable Communication Device and Online Learning System. Under Wearable communication method, there are Glove based system, Keypad method and Handicom Touchscreen. All the above mentioned three sub-divided methods make use of various sensors, accelerometer, a suitable microcontroller, a text to speech conversion module, a keypad and a touch-screen. The need for an external device to interpret the message between a deaf –mute and non-deaf-mute people can be overcome by the second method i.e online learning system. The Online Learning System has different methods under it, five of which are explained in this paper. The five sub-divided methods areSLIM module, TESSA, Wi-See Technology, SWI_PELE System and Web-Sign Technology. The working of the individual components used and the operation of the whole system for the communication purpose has been explained in detail in this paper.",
"title": ""
},
{
"docid": "6daa93f2a7cfaaa047ecdc04fb802479",
"text": "Facial landmark localization is important to many facial recognition and analysis tasks, such as face attributes analysis, head pose estimation, 3D face modelling, and facial expression analysis. In this paper, we propose a new approach to localizing landmarks in facial image by deep convolutional neural network (DCNN). We make two enhancements on the CNN to adapt it to the feature localization task as follows. Firstly, we replace the commonly used max pooling by depth-wise convolution to obtain better localization performance. Secondly, we define a response map for each facial points as a 2D probability map indicating the presence likelihood, and train our model with a KL divergence loss. To obtain robust localization results, our approach first takes the expectations of the response maps of Enhanced CNN and then applies auto-encoder model to the global shape vector, which is effective to rectify the outlier points by the prior global landmark configurations. The proposed ECNN method achieves 5.32% mean error on the experiments on the 300-W dataset, which is comparable to the state-of-the-art performance on this standard benchmark, showing the effectiveness of our methods.",
"title": ""
},
{
"docid": "4c67d3686008e377220314323a35eecb",
"text": "Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.",
"title": ""
},
{
"docid": "8272f6d511cc8aa104ba10c23deb17a5",
"text": "The challenge of developing facial recognition systems has been the focus of many research efforts in recent years and has numerous applications in areas such as security, entertainment, and biometrics. Recently, most progress in this field has come from training very deep neural networks on massive datasets which is computationally intensive and time consuming. Here, we propose a deep transfer learning (DTL) approach that integrates transfer learning techniques and convolutional neural networks and apply it to the problem of facial recognition to fine-tune facial recognition models. Transfer learning can allow for the training of robust, high-performance machine learning models that require much less time and resources to produce than similarly performing models that have been trained from scratch. Using a pre-trained face recognition model, we were able to perform transfer learning to produce a network that is capable of making accurate predictions on much smaller datasets. We also compare our results with results produced by a selection of classical algorithms on the same datasets to demonstrate the effectiveness of the proposed DTL approach.",
"title": ""
},
{
"docid": "ad584a07befbfff1dff36c18ea830a4e",
"text": "In this paper, we review some of the novel emerging memory technologies and how they can enable energy-efficient implementation of large neuromorphic computing systems. We will highlight some of the key aspects of biological computation that are being mimicked in these novel nanoscale devices, and discuss various strategies employed to implement them efficiently. Though large scale learning systems have not been implemented using these devices yet, we will discuss the ideal specifications and metrics to be satisfied by these devices based on theoretical estimations and simulations. We also outline the emerging trends and challenges in the path towards successful implementations of large learning systems that could be ubiquitously deployed for a wide variety of cognitive computing tasks.",
"title": ""
},
{
"docid": "116ab901f60a7282f8a2ea245c59b679",
"text": "Image classification is a vital technology many people in all arenas of human life utilize. It is pervasive in every facet of the social, economic, and corporate spheres of influence, worldwide. This need for more accurate, detail-oriented classification increases the need for modifications, adaptations, and innovations to Deep learning algorithms. This paper uses Convolutional Neural Networks (CNN) to classify handwritten digits in the MNIST database, and scenes in the CIFAR-10 database. Our proposed method preprocesses the data in the wavelet domain to attain greater accuracy and comparable efficiency to the spatial domain processing. By separating the image into different subbands, important feature learning occurs over varying low to high frequencies. The fusion of the learned low and high frequency features, and processing the combined feature mapping results in an increase in the detection accuracy. Comparing the proposed methods to spatial domain CNN and Stacked Denoising Autoencoder (SDA), experimental findings reveal a substantial increase in accuracy.",
"title": ""
},
{
"docid": "a52a90bb69f303c4a31e4f24daf609e6",
"text": "The effects of Arctium lappa L. (root) on anti-inflammatory and free radical scavenger activity were investigated. Subcutaneous administration of A. lappa crude extract significantly decreased carrageenan-induced rat paw edema. When simultaneously treated with CCl4, it produced pronounced activities against CCl4-induced acute liver damage. The free radical scavenging activity of its crude extract was also examined by means of an electron spin resonance (ESR) spectrometer. The IC50 of A. lappa extract on superoxide and hydroxyl radical scavenger activity was 2.06 mg/ml and 11.8 mg/ml, respectively. These findings suggest that Arctium lappa possess free radical scavenging activity. The inhibitory effects on carrageenan-induced paw edema and CCl4-induced hepatotoxicity could be due to the scavenging effect of A. lappa.",
"title": ""
},
{
"docid": "dacf2f44c3f8fc0931dceda7e4cb9bef",
"text": "Brain-computer interaction has already moved from assistive care to applications such as gaming. Improvements in usability, hardware, signal processing, and system integration should yield applications in other nonmedical areas.",
"title": ""
},
{
"docid": "a9709367bc84ececd98f65ed7359f6b0",
"text": "Though many tools are available to help programmers working on change tasks, and several studies have been conducted to understand how programmers comprehend systems, little is known about the specific kinds of questions programmers ask when evolving a code base. To fill this gap we conducted two qualitative studies of programmers performing change tasks to medium to large sized programs. One study involved newcomers working on assigned change tasks to a medium-sized code base. The other study involved industrial programmers working on their own change tasks on code with which they had experience. The focus of our analysis has been on what information a programmer needs to know about a code base while performing a change task and also on howthey go about discovering that information. Based on this analysis we catalog and categorize 44 different kinds of questions asked by our participants. We also describe important context for how those questions were answered by our participants, including their use of tools.",
"title": ""
},
{
"docid": "9f68df51d0d47b539a6c42207536d012",
"text": "Schizophrenia-spectrum risk alleles may persist in the population, despite their reproductive costs in individuals with schizophrenia, through the possible creativity benefits of mild schizotypy in non-psychotic relatives. To assess this creativity-benefit model, we measured creativity (using 6 verbal and 8 drawing tasks), schizotypy, Big Five personality traits, and general intelligence in 225 University of New Mexico students. Multiple regression analyses showed that openness and intelligence, but not schizotypy, predicted reliable observer ratings of verbal and drawing creativity. Thus, the 'madness-creativity' link seems mediated by the personality trait of openness, and standard creativity-benefit models seem unlikely to explain schizophrenia's evolutionary persistence.",
"title": ""
},
{
"docid": "5591247b2e28f436da302757d3f82122",
"text": "This paper proposes LPRNet end-to-end method for Automatic License Plate Recognition without preliminary character segmentation. Our approach is inspired by recent breakthroughs in Deep Neural Networks, and works in real-time with recognition accuracy up to 95% for Chinese license plates: 3 ms/plate on nVIDIA R © GeForceTMGTX 1080 and 1.3 ms/plate on Intel R © CoreTMi7-6700K CPU. LPRNet consists of the lightweight Convolutional Neural Network, so it can be trained in end-to-end way. To the best of our knowledge, LPRNet is the first real-time License Plate Recognition system that does not use RNNs. As a result, the LPRNet algorithm may be used to create embedded solutions for LPR that feature high level accuracy even on challenging Chinese license plates.",
"title": ""
},
{
"docid": "d51ef75ccf464cc03656210ec500db44",
"text": "The choice of a business process modelling (BPM) tool in combination with the selection of a modelling language is one of the crucial steps in BPM project preparation. Different aspects influence the decision: tool functionality, price, modelling language support, etc. In this paper we discuss the aspect of usability, which has already been recognized as an important topic in software engineering and web design. We conduct a literature review to find out the current state of research on the usability in the BPM field. The results of the literature review show, that although a number of research papers mention the importance of usability for BPM tools, real usability evaluation studies have rarely been undertaken. Based on the results of the literature analysis, the possible research directions in the field of usability of BPM tools are suggested.",
"title": ""
},
{
"docid": "67a3f92ab8c5a6379a30158bb9905276",
"text": "We present a compendium of recent and current projects that utilize crowdsourcing technologies for language studies, finding that the quality is comparable to controlled laboratory experiments, and in some cases superior. While crowdsourcing has primarily been used for annotation in recent language studies, the results here demonstrate that far richer data may be generated in a range of linguistic disciplines from semantics to psycholinguistics. For these, we report a number of successful methods for evaluating data quality in the absence of a ‘correct’ response for any given data point.",
"title": ""
},
{
"docid": "c24bfd3b7bbc8222f253b004b522f7d5",
"text": "The Audio/Visual Emotion Challenge and Workshop (AVEC 2017) \"Real-life depression, and affect\" will be the seventh competition event aimed at comparison of multimedia processing and machine learning methods for automatic audiovisual depression and emotion analysis, with all participants competing under strictly the same conditions. The goal of the Challenge is to provide a common benchmark test set for multimodal information processing and to bring together the depression and emotion recognition communities, as well as the audiovisual processing communities, to compare the relative merits of the various approaches to depression and emotion recognition from real-life data. This paper presents the novelties introduced this year, the challenge guidelines, the data used, and the performance of the baseline system on the two proposed tasks: dimensional emotion recognition (time and value-continuous), and dimensional depression estimation (value-continuous).",
"title": ""
},
{
"docid": "6d3dc0ea95cd8626aff6938748b58d1a",
"text": "Mango cultivation methods being adopted currently are ineffective and low productive despite consuming huge man power. Advancements in robust unmanned aerial vehicles (UAV's), high speed image processing algorithms and machine vision techniques, reinforce the possibility of transforming agricultural scenario to modernity within prevailing time and energy constraints. Present paper introduces Agricultural Aid for Mango cutting (AAM), an Agribot that could be employed for precision mango farming. It is a quadcopter empowered with vision and cutter systems complemented with necessary ancillaries. It could hover around the trees, detect the ripe mangoes, cut and collect them. Paper also sheds light on the available Agribots that have mostly been limited to the research labs. AAM robot is the first of its kind that once implemented could pave way to the next generation Agribots capable of increasing the agricultural productivity and justify the existence of intelligent machines.",
"title": ""
},
{
"docid": "ec31c653457389bd587b26f4427c90d7",
"text": "Segmentation and classification of urban range data into different object classes have several challenges due to certain properties of the data, such as density variation, inconsistencies due to missing data and the large data size that require heavy computation and large memory. A method to classify urban scenes based on a super-voxel segmentation of sparse 3D data obtained from LiDAR sensors is presented. The 3D point cloud is first segmented into voxels, which are then characterized by several attributes transforming them into super-voxels. These are joined together by using a link-chain method rather than the usual region growing algorithm to create objects. These objects are then classified using geometrical models and local descriptors. In order to evaluate the results, a new metric that combines both segmentation and classification results simultaneously is presented. The effects of voxel size and incorporation of RGB color and laser reflectance intensity on the classification results are also discussed. The method is evaluated on standard data sets using different metrics to demonstrate its efficacy.",
"title": ""
},
{
"docid": "f65c027ab5baa981667955cc300d2f34",
"text": "In-band full-duplex (FD) wireless communication, i.e. simultaneous transmission and reception at the same frequency, in the same channel, promises up to 2x spectral efficiency, along with advantages in higher network layers [1]. the main challenge is dealing with strong in-band leakage from the transmitter to the receiver (i.e. self-interference (SI)), as TX powers are typically >100dB stronger than the weakest signal to be received, necessitating TX-RX isolation and SI cancellation. Performing this SI-cancellation solely in the digital domain, if at all possible, would require extremely clean (low-EVM) transmission and a huge dynamic range in the RX and ADC, which is currently not feasible [2]. Cancelling SI entirely in analog is not feasible either, since the SI contains delayed TX components reflected by the environment. Cancelling these requires impractically large amounts of tunable analog delay. Hence, FD-solutions proposed thus far combine SI-rejection at RF, analog BB, digital BB and cross-domain.",
"title": ""
},
{
"docid": "5ca36b7877ebd3d05e48d3230f2dceb0",
"text": "BACKGROUND\nThe frontal branch has a defined course along the Pitanguy line from tragus to lateral brow, although its depth along this line is controversial. The high-superficial musculoaponeurotic system (SMAS) face-lift technique divides the SMAS above the arch, which conflicts with previous descriptions of the frontal nerve depth. This anatomical study defines the depth and fascial boundaries of the frontal branch of the facial nerve over the zygomatic arch.\n\n\nMETHODS\nEight fresh cadaver heads were included in the study, with bilateral facial nerves studied (n = 16). The proximal frontal branches were isolated and then sectioned in full-thickness tissue blocks over a 5-cm distance over the zygomatic arch. The tissue blocks were evaluated histologically for the depth and fascial planes surrounding the frontal nerve. A dissection video accompanies this article.\n\n\nRESULTS\nThe frontal branch of the facial nerve was identified in each tissue section and its fascial boundaries were easily identified using epidermis and periosteum as reference points. The frontal branch coursed under a separate fascial plane, the parotid-temporal fascia, which was deep to the SMAS as it coursed to the zygomatic arch and remained within this deep fascia over the arch. The frontal branch was intact and protected by the parotid-temporal fascia after a high-SMAS face lift.\n\n\nCONCLUSIONS\nThe frontal branch of the facial nerve is protected by a deep layer of fascia, termed the parotid-temporal fascia, which is separate from the SMAS as it travels over the zygomatic arch. Division of the SMAS above the arch in a high-SMAS face lift is safe using the technique described in this study.",
"title": ""
},
{
"docid": "acc26655abb2a181034db8571409d0a5",
"text": "In this paper, a new optimization approach is designed for convolutional neural network (CNN) which introduces explicit logical relations between filters in the convolutional layer. In a conventional CNN, the filters’ weights in convolutional layers are separately trained by their own residual errors, and the relations of these filters are not explored for learning. Different from the traditional learning mechanism, the proposed correlative filters (CFs) are initiated and trained jointly in accordance with predefined correlations, which are efficient to work cooperatively and finally make a more generalized optical system. The improvement in CNN performance with the proposed CF is verified on five benchmark image classification datasets, including CIFAR-10, CIFAR-100, MNIST, STL-10, and street view house number. The comparative experimental results demonstrate that the proposed approach outperforms a number of state-of-the-art CNN approaches.",
"title": ""
},
{
"docid": "941d7a7a59261fe2463f42cad9cff004",
"text": "Dragon's blood is one of the renowned traditional medicines used in different cultures of world. It has got several therapeutic uses: haemostatic, antidiarrhetic, antiulcer, antimicrobial, antiviral, wound healing, antitumor, anti-inflammatory, antioxidant, etc. Besides these medicinal applications, it is used as a coloring material, varnish and also has got applications in folk magic. These red saps and resins are derived from a number of disparate taxa. Despite its wide uses, little research has been done to know about its true source, quality control and clinical applications. In this review, we have tried to overview different sources of Dragon's blood, its source wise chemical constituents and therapeutic uses. As well as, a little attempt has been done to review the techniques used for its quality control and safety.",
"title": ""
}
] |
scidocsrr
|
2248259ddc80c4f4f4eb8b028affc8ef
|
ASTRO: A Datalog System for Advanced Stream Reasoning
|
[
{
"docid": "47ac4b546fe75f2556a879d6188d4440",
"text": "There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.",
"title": ""
},
{
"docid": "4b03aeb6c56cc25ce57282279756d1ff",
"text": "Weighted signed networks (WSNs) are networks in which edges are labeled with positive and negative weights. WSNs can capture like/dislike, trust/distrust, and other social relationships between people. In this paper, we consider the problem of predicting the weights of edges in such networks. We propose two novel measures of node behavior: the goodness of a node intuitively captures how much this node is liked/trusted by other nodes, while the fairness of a node captures how fair the node is in rating other nodes' likeability or trust level. We provide axioms that these two notions need to satisfy and show that past work does not meet these requirements for WSNs. We provide a mutually recursive definition of these two concepts and prove that they converge to a unique solution in linear time. We use the two measures to predict the edge weight in WSNs. Furthermore, we show that when compared against several individual algorithms from both the signed and unsigned social network literature, our fairness and goodness metrics almost always have the best predictive power. We then use these as features in different multiple regression models and show that we can predict edge weights on 2 Bitcoin WSNs, an Epinions WSN, 2 WSNs derived from Wikipedia, and a WSN derived from Twitter with more accurate results than past work. Moreover, fairness and goodness metrics form the most significant feature for prediction in most (but not all) cases.",
"title": ""
}
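The fairness and goodness scores described in the passage above are defined mutually recursively and computed by iterating to a fixed point. The sketch below shows one common way such an iteration can be organized in Python; the exact update formulas and the toy edge list are assumptions made for illustration rather than equations quoted from the record.

```python
def fairness_goodness(edges, n_iter=100, tol=1e-6):
    """Iterate fairness (of raters) and goodness (of ratees) scores on a weighted
    signed network until they stop changing. `edges` is a list of (u, v, w)
    triples with rating weights w scaled to [-1, 1]."""
    out_edges, in_edges, nodes = {}, {}, set()
    for u, v, w in edges:
        out_edges.setdefault(u, []).append((v, w))
        in_edges.setdefault(v, []).append((u, w))
        nodes.update((u, v))

    fairness = {u: 1.0 for u in nodes}
    goodness = {v: 1.0 for v in nodes}

    for _ in range(n_iter):
        # Goodness of v: fairness-weighted average of the ratings v received.
        new_goodness = {
            v: (sum(fairness[u] * w for u, w in in_edges[v]) / len(in_edges[v])
                if v in in_edges else goodness[v])
            for v in nodes
        }
        # Fairness of u: 1 minus u's average disagreement with the ratees' goodness.
        new_fairness = {
            u: (1.0 - sum(abs(w - new_goodness[v]) for v, w in out_edges[u])
                / (2.0 * len(out_edges[u]))
                if u in out_edges else fairness[u])
            for u in nodes
        }
        delta = (max(abs(new_goodness[v] - goodness[v]) for v in nodes)
                 + max(abs(new_fairness[u] - fairness[u]) for u in nodes))
        fairness, goodness = new_fairness, new_goodness
        if delta < tol:
            break
    return fairness, goodness

# Made-up toy ratings: "a" and "c" rate "b"; "a" rates "c".
edges = [("a", "b", 0.8), ("c", "b", -0.9), ("a", "c", 0.3)]
f, g = fairness_goodness(edges)
print(f, g)
```

The resulting scores can then be used as features for edge-weight prediction, which is how the passage describes their use.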
] |
[
{
"docid": "0a4a124589dffca733fa9fa87dc94b35",
"text": "where ri is the reward in cycle i of a given history, and the expected value is taken over all possible interaction histories of π and μ. The choice of γi is a subtle issue that controls how greedy or far sighted the agent should be. Here we use the near-harmonic γi := 1/i2 as this produces an agent with increasing farsightedness of the order of its current age [Hutter2004]. As we desire an extremely general definition of intelligence for arbitrary systems, our space of environments should be as large as possible. An obvious choice is the space of all probability measures, however this causes serious problems as we cannot even describe some of these measures in a finite way.",
"title": ""
},
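The fragment above weights the reward r_i of cycle i by a near-harmonic discount γ_i := 1/i^2. A minimal sketch of evaluating such a discounted return for a finite reward history is given below; the reward values are placeholders, not data from the cited work.

```python
def discounted_return(rewards):
    """Sum of gamma_i * r_i with near-harmonic discounting gamma_i = 1 / i^2,
    over a finite reward history r_1, r_2, ... (1-indexed)."""
    return sum(r / (i * i) for i, r in enumerate(rewards, start=1))

print(discounted_return([1.0, 0.5, 0.25]))  # placeholder rewards: 1/1 + 0.5/4 + 0.25/9
```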
{
"docid": "9d62bcab72472183be38ad67635d744d",
"text": "In most challenging data analysis applications, data evolve over time and must be analyzed in near real time. Patterns and relations in such data often evolve over time, thus, models built for analyzing such data quickly become obsolete over time. In machine learning and data mining this phenomenon is referred to as concept drift. The objective is to deploy models that would diagnose themselves and adapt to changing data over time. This chapter provides an application oriented view towards concept drift research, with a focus on supervised learning tasks. First we overview and categorize application tasks for which the problem of concept drift is particularly relevant. Then we construct a reference framework for positioning application tasks within a spectrum of problems related to concept drift. Finally, we discuss some promising research directions from the application perspective, and present recommendations for application driven concept drift research and development.",
"title": ""
},
{
"docid": "44ecfa6fb5c31abf3a035dea9e709d11",
"text": "The issue of the variant vs. invariant in personality often arises in diVerent forms of the “person– situation” debate, which is based on a false dichotomy between the personal and situational determination of behavior. Previously reported data are summarized that demonstrate how behavior can vary as a function of subtle situational changes while individual consistency is maintained. Further discussion considers the personal source of behavioral invariance, the situational source of behavioral variation, the person–situation interaction, the nature of behavior, and the “personality triad” of persons, situations, and behaviors, in which each element is understood and predicted in terms of the other two. An important goal for future research is further development of theories and methods for conceptualizing and measuring the functional aspects of situations and of behaviors. One reason for the persistence of the person situation debate may be that it serves as a proxy for a deeper, implicit debate over values such as equality vs. individuality, determinism vs. free will, and Xexibility vs. consistency. However, these value dichotomies may be as false as the person–situation debate that they implicitly drive. 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "d940c448ef854fd8c50bdf08a03cd008",
"text": "The Multi-task Cascaded Convolutional Networks (MTCNN) has recently demonstrated impressive results on jointly face detection and alignment. By using the hard sample ming and training a model on FER2013 datasets, we exploit the inherent correlation between face detection and facial express-ion recognition, and report the results of facial expression recognition based on MTCNN.",
"title": ""
},
{
"docid": "55767c008ad459f570fb6b99eea0b26d",
"text": "The Tor network relies on volunteer relay operators for relay bandwidth, which may limit its growth and scaling potential. We propose an incentive scheme for Tor relying on two novel concepts. We introduce TorCoin, an “altcoin” that uses the Bitcoin protocol to reward relays for contributing bandwidth. Relays “mine” TorCoins, then sell them for cash on any existing altcoin exchange. To verify that a given TorCoin represents actual bandwidth transferred, we introduce TorPath, a decentralized protocol for forming Tor circuits such that each circuit is privately-addressable but publicly verifiable. Each circuit’s participants may then collectively mine a limited number of TorCoins, in proportion to the end-to-end transmission goodput they measure on that circuit.",
"title": ""
},
{
"docid": "46adb7a040a2d8a40910a9f03825588d",
"text": "The aim of this study was to investigate the consequences of friend networking sites (e.g., Friendster, MySpace) for adolescents' self-esteem and well-being. We conducted a survey among 881 adolescents (10-19-year-olds) who had an online profile on a Dutch friend networking site. Using structural equation modeling, we found that the frequency with which adolescents used the site had an indirect effect on their social self-esteem and well-being. The use of the friend networking site stimulated the number of relationships formed on the site, the frequency with which adolescents received feedback on their profiles, and the tone (i.e., positive vs. negative) of this feedback. Positive feedback on the profiles enhanced adolescents' social self-esteem and well-being, whereas negative feedback decreased their self-esteem and well-being.",
"title": ""
},
{
"docid": "b98c34a4be7f86fb9506a6b1620b5d3e",
"text": "A portable civilian GPS spoofer is implemented on a digital signal processor and used to characterize spoofing effects and develop defenses against civilian spoofing. This work is intended to equip GNSS users and receiver manufacturers with authentication methods that are effective against unsophisticated spoofing attacks. The work also serves to refine the civilian spoofing threat assessment by demonstrating the challenges involved in mounting a spoofing attack.",
"title": ""
},
{
"docid": "0caf6ead2548d556d9dd0564fb59d8cc",
"text": "A modified microstrip Franklin array antenna (MFAA) is proposed for short-range radar applications in vehicle blind spot information systems (BLIS). It is shown that the radiating performance [i.e., absolute gain value and half-power beamwidth (HPBW) angle] can be improved by increasing the number of radiators in the MFAA structure and assigning appropriate values to the antenna geometry parameters. The MFAA possesses a high absolute gain value (>10 dB), good directivity (HPBW <20°) in the E-plane, and large-range coverage (HPBW >80°) in the H-plane at an operating frequency of 24 GHz. Moreover, the 10-dB impedance bandwidth of the proposed antenna is around 250 MHz. The MFAA is, thus, an ideal candidate for automotive BLIS applications.",
"title": ""
},
{
"docid": "edc89ba0554d32297a9aab7103c4abb9",
"text": "During the last years, agile methods like eXtreme Programming have become increasingly popular. Parallel to this, more and more organizations rely on process maturity models to assess and improve their own processes or those of suppliers, since it has been getting clear that most project failures can be imputed to inconsistent, undisciplined processes. Many organizations demand CMMI compliance of projects where agile methods are employed. In this situation it is necessary to analyze the interrelations and mutual restrictions between agile methods and approaches for software process analysis and improvement. This paper analyzes to what extent the CMMI process areas can be covered by XP and where adjustments of XP have to be made. Based on this, we describe the limitations of CMMI in an agile environment and show that level 4 or 5 are not feasible under the current specifications of CMMI and XP.",
"title": ""
},
{
"docid": "7a7c358eaa5752d6984a56429f58c556",
"text": "If the training dataset is not very large, image recognition is usually implemented with the transfer learning methods. In these methods the features are extracted using a deep convolutional neural network, which was preliminarily trained with an external very-large dataset. In this paper we consider the nonparametric classification of extracted feature vectors with the probabilistic neural network (PNN). The number of neurons at the pattern layer of the PNN is equal to the database size, which causes the low recognition performance and high memory space complexity of this network. We propose to overcome these drawbacks by replacing the exponential activation function in the Gaussian Parzen kernel to the complex exponential functions in the Fej\\'er kernel. We demonstrate that in this case it is possible to implement the network with the number of neurons in the pattern layer proportional to the cubic root of the database size. Thus, the proposed modification of the PNN makes it possible to significantly decrease runtime and memory complexities without loosing its main advantages, namely, extremely fast training procedure and the convergence to the optimal Bayesian decision. An experimental study in visual object category classification and unconstrained face recognition with contemporary deep neural networks have shown, that our approach obtains very efficient and rather accurate decisions for the small training sample in comparison with the well-known classifiers.",
"title": ""
},
{
"docid": "343ed18e56e6f562fa509710e4cf8dc6",
"text": "The automated analysis of facial expressions has been widely used in different research areas, such as biometrics or emotional analysis. Special importance is attached to facial expressions in the area of sign language, since they help to form the grammatical structure of the language and allow for the creation of language disambiguation, and thus are called Grammatical Facial Expressions (GFEs). In this paper we outline the recognition of GFEs used in the Brazilian Sign Language. In order to reach this objective, we have captured nine types of GFEs using a KinectTMsensor, designed a spatial-temporal data representation, modeled the research question as a set of binary classification problems, and employed a Machine Learning technique.",
"title": ""
},
{
"docid": "5091c24d3ed56e45246fb444a66c290e",
"text": "This paper defines presence in terms of frames and involvement [1]. The value of this analysis of presence is demonstrated by applying it to several issues that have been raised about presence: residual awareness of nonmediation, imaginary presence, presence as categorical or continuum, and presence breaks. The paper goes on to explore the relationship between presence and reality. Goffman introduced frames to try to answer the question, “Under what circumstances do we think things real?” Under frame analysis there are three different conditions under which things are considered unreal, these are explained and related to the experience of presence. Frame analysis is used to show why virtual environments are not usually considered to be part of reality, although the virtual spaces of phone interaction are considered real. The analysis also yields practical suggestions for extending presence within virtual environments. Keywords--presence, frames, virtual environments, mobile phones, Goffman.",
"title": ""
},
{
"docid": "aad7697ce9d9af2b49cd3a46e441ef8e",
"text": "Soft pneumatic actuators (SPAs) are versatile robotic components enabling diverse and complex soft robot hardware design. However, due to inherent material characteristics exhibited by their primary constitutive material, silicone rubber, they often lack robustness and repeatability in performance. In this article, we present a novel SPA-based bending module design with shell reinforcement. The bidirectional soft actuator presented here is enveloped in a Yoshimura patterned origami shell, which acts as an additional protection layer covering the SPA while providing specific bending resilience throughout the actuator’s range of motion. Mechanical tests are performed to characterize several shell folding patterns and their effect on the actuator performance. Details on design decisions and experimental results using the SPA with origami shell modules and performance analysis are presented; the performance of the bending module is significantly enhanced when reinforcement is provided by the shell. With the aid of the shell, the bending module is capable of sustaining higher inflation pressures, delivering larger blocked torques, and generating the targeted motion trajectory.",
"title": ""
},
{
"docid": "49d164ec845f6201f56e18a575ed9436",
"text": "This research explores a Natural Language Processing technique utilized for the automatic reduction of melodies: the Probabilistic Context-Free Grammar (PCFG). Automatic melodic reduction was previously explored by means of a probabilistic grammar [11] [1]. However, each of these methods used unsupervised learning to estimate the probabilities for the grammar rules, and thus a corpusbased evaluation was not performed. A dataset of analyses using the Generative Theory of Tonal Music (GTTM) exists [13], which contains 300 Western tonal melodies and their corresponding melodic reductions in tree format. In this work, supervised learning is used to train a PCFG for the task of melodic reduction, using the tree analyses provided by the GTTM dataset. The resulting model is evaluated on its ability to create accurate reduction trees, based on a node-by-node comparison with ground-truth trees. Multiple data representations are explored, and example output reductions are shown. Motivations for performing melodic reduction include melodic identification and similarity, efficient storage of melodies, automatic composition, variation matching, and automatic harmonic analysis.",
"title": ""
},
{
"docid": "41e9dac7301e00793c6e4891e07b53fa",
"text": "We present an intriguing property of visual data that we observe in our attempt to isolate the influence of data for learning a visual representation. We observe that we can get better performance than existing model by just conditioning the existing representation on a million unlabeled images without any extra knowledge. As a by-product of this study, we achieve results better than prior state-of-theart for surface normal estimation on NYU-v2 depth dataset, and improved results for semantic segmentation using a selfsupervised representation on PASCAL-VOC 2012 dataset.",
"title": ""
},
{
"docid": "67d317befd382c34c143ebfe806a3b55",
"text": "In this paper, we present a novel meta-feature generation method in the context of meta-learning, which is based on rules that compare the performance of individual base learners in a one-against-one manner. In addition to these new meta-features, we also introduce a new meta-learner called Approximate Ranking Tree Forests (ART Forests) that performs very competitively when compared with several state-of-the-art meta-learners. Our experimental results are based on a large collection of datasets and show that the proposed new techniques can improve the overall performance of meta-learning for algorithm ranking significantly. A key point in our approach is that each performance figure of any base learner for any specific dataset is generated by optimising the parameters of the base learner separately for each dataset.",
"title": ""
},
{
"docid": "08134d0d76acf866a71d660062f2aeb8",
"text": "Colorization methods using deep neural networks have become a recent trend. However, most of them do not allow user inputs, or only allow limited user inputs (only global inputs or only local inputs), to control the output colorful images. The possible reason is that it’s difficult to differentiate the influence of different kind of user inputs in network training. To solve this problem, we present a novel deep colorization method, which allows simultaneous global and local inputs to better control the output colorized images. The key step is to design an appropriate loss function that can differentiate the influence of input data, global inputs and local inputs. With this design, our method accepts no inputs, or global inputs, or local inputs, or both global and local inputs, which is not supported in previous deep colorization methods. In addition, we propose a global color theme recommendation system to help users determine global inputs. Experimental results shows that our methods can better control the colorized images and generate state-of-art results.",
"title": ""
},
{
"docid": "ed3b8bfdd6048e4a07ee988f1e35fd21",
"text": "Accurate and automatic organ segmentation from 3D radiological scans is an important yet challenging problem for medical image analysis. Specifically, as a small, soft, and flexible abdominal organ, the pancreas demonstrates very high inter-patient anatomical variability in both its shape and volume. This inhibits traditional automated segmentation methods from achieving high accuracies, especially compared to the performance obtained for other organs, such as the liver, heart or kidneys. To fill this gap, we present an automated system from 3D computed tomography (CT) volumes that is based on a two-stage cascaded approach-pancreas localization and pancreas segmentation. For the first step, we localize the pancreas from the entire 3D CT scan, providing a reliable bounding box for the more refined segmentation step. We introduce a fully deep-learning approach, based on an efficient application of holistically-nested convolutional networks (HNNs) on the three orthogonal axial, sagittal, and coronal views. The resulting HNN per-pixel probability maps are then fused using pooling to reliably produce a 3D bounding box of the pancreas that maximizes the recall. We show that our introduced localizer compares favorably to both a conventional non-deep-learning method and a recent hybrid approach based on spatial aggregation of superpixels using random forest classification. The second, segmentation, phase operates within the computed bounding box and integrates semantic mid-level cues of deeply-learned organ interior and boundary maps, obtained by two additional and separate realizations of HNNs. By integrating these two mid-level cues, our method is capable of generating boundary-preserving pixel-wise class label maps that result in the final pancreas segmentation. Quantitative evaluation is performed on a publicly available dataset of 82 patient CT scans using 4-fold cross-validation (CV). We achieve a (mean ± std. dev.) Dice similarity coefficient (DSC) of 81.27 ± 6.27% in validation, which significantly outperforms both a previous state-of-the art method and a preliminary version of this work that report DSCs of 71.80 ± 10.70% and 78.01 ± 8.20%, respectively, using the same dataset.",
"title": ""
},
{
"docid": "23832f031f7c700f741843e54ff81b4e",
"text": "Data Mining in medicine is an emerging field of great importance to provide a prognosis and deeper understanding of disease classification, specifically in Mental Health areas. The main objective of this paper is to present a review of the existing research works in the literature, referring to the techniques and algorithms of Data Mining in Mental Health, specifically in the most prevalent diseases such as: Dementia, Alzheimer, Schizophrenia and Depression. Academic databases that were used to perform the searches are Google Scholar, IEEE Xplore, PubMed, Science Direct, Scopus and Web of Science, taking into account as date of publication the last 10 years, from 2008 to the present. Several search criteria were established such as ‘techniques’ AND ‘Data Mining’ AND ‘Mental Health’, ‘algorithms’ AND ‘Data Mining’ AND ‘dementia’ AND ‘schizophrenia’ AND ‘depression’, etc. selecting the papers of greatest interest. A total of 211 articles were found related to techniques and algorithms of Data Mining applied to the main Mental Health diseases. 72 articles have been identified as relevant works of which 32% are Alzheimer’s, 22% dementia, 24% depression, 14% schizophrenia and 8% bipolar disorders. Many of the papers show the prediction of risk factors in these diseases. From the review of the research articles analyzed, it can be said that use of Data Mining techniques applied to diseases such as dementia, schizophrenia, depression, etc. can be of great help to the clinical decision, diagnosis prediction and improve the patient’s quality of life.",
"title": ""
}
] |
scidocsrr
|
63867bb9be12855f80ba17d341d9aa3c
|
Tree Edit Distance Learning via Adaptive Symbol Embeddings
|
[
{
"docid": "b16dfd7a36069ed7df12f088c44922c5",
"text": "This paper considers the problem of computing the editing distance between unordered, labeled trees. We give efficient polynomial-time algorithms for the case when one tree is a string or has a bounded number of leaves. By contrast, we show that the problem is NP -complete even for binary trees having a label alphabet of size two. keywords: Computational Complexity, Unordered trees, NP -completeness.",
"title": ""
},
{
"docid": "c75095680818ccc7094e4d53815ef475",
"text": "We propose a new learning method, \"Generalized Learning Vector Quantization (GLVQ),\" in which reference vectors are updated based on the steepest descent method in order to minimize the cost function . The cost function is determined so that the obtained learning rule satisfies the convergence condition. We prove that Kohonen's rule as used in LVQ does not satisfy the convergence condition and thus degrades recognition ability. Experimental results for printed Chinese character recognition reveal that GLVQ is superior to LVQ in recognition ability.",
"title": ""
}
] |
[
{
"docid": "7d3f0c22674ac3febe309c2440ad3d90",
"text": "MAC address randomization is a common privacy protection measure deployed in major operating systems today. It is used to prevent user-tracking with probe requests that are transmitted during IEEE 802.11 network scans. We present an attack to defeat MAC address randomization through observation of the timings of the network scans with an off-the-shelf Wi-Fi interface. This attack relies on a signature based on inter-frame arrival times of probe requests, which is used to group together frames coming from the same device although they use distinct MAC addresses. We propose several distance metrics based on timing and use them together with an incremental learning algorithm in order to group frames. We show that these signatures are consistent over time and can be used as a pseudo-identifier to track devices. Our framework is able to correctly group frames using different MAC addresses but belonging to the same device in up to 75% of the cases. These results show that the timing of 802.11 probe frames can be abused to track individual devices and that address randomization alone is not always enough to protect users against tracking.",
"title": ""
},
{
"docid": "4e95abd0786147e5e9f4195f2c7a8ff7",
"text": "The contour tree is an abstraction of a scalar field that encodes the nesting relationships of isosurfaces. We show how to use the contour tree to represent individual contours of a scalar field, how to simplify both the contour tree and the topology of the scalar field, how to compute and store geometric properties for all possible contours in the contour tree, and how to use the simplified contour tree as an interface for exploratory visualization.",
"title": ""
},
{
"docid": "e8792ced13f1be61d031e2b150cc5cf6",
"text": "Scientific literature cites a wide range of values for caffeine content in food products. The authors suggest the following standard values for the United States: coffee (5 oz) 85 mg for ground roasted coffee, 60 mg for instant and 3 mg for decaffeinated; tea (5 oz): 30 mg for leaf/bag and 20 mg for instant; colas: 18 mg/6 oz serving; cocoa/hot chocolate: 4 mg/5 oz; chocolate milk: 4 mg/6 oz; chocolate candy: 1.5-6.0 mg/oz. Some products from the United Kingdom and Denmark have higher caffeine content. Caffeine consumption survey data are limited. Based on product usage and available consumption data, the authors suggest a mean daily caffeine intake for US consumers of 4 mg/kg. Among children younger than 18 years of age who are consumers of caffeine-containing foods, the mean daily caffeine intake is about 1 mg/kg. Both adults and children in Denmark and UK have higher levels of caffeine intake.",
"title": ""
},
{
"docid": "32dd24b2c3bcc15dd285b2ffacc1ba43",
"text": "In this paper, we present for the first time the realization of a 77 GHz chip-to-rectangular waveguide transition realized in an embedded Wafer Level Ball Grid Array (eWLB) package. The chip is contacted with a coplanar waveguide (CPW). For the transformation of the transverse electromagnetic (TEM) mode of the CPW line to the transverse electric (TE) mode of the rectangular waveguide an insert is used in the eWLB package. This insert is based on radio-frequency (RF) printed circuit board (PCB) technology. Micro vias formed in the insert are used to realize the sidewalls of the rectangular waveguide structure in the fan-out area of the eWLB package. The redistribution layers (RDLs) on the top and bottom surface of the package form the top and bottom wall, respectively. We present two possible variants of transforming the TEM mode to the TE mode. The first variant uses a via realized in the rectangular waveguide structure. The second variant uses only the RDLs of the eWLB package for mode conversion. We present simulation and measurement results of both variants. We obtain an insertion loss of 1.5 dB and return loss better than 10 dB. The presented results show that this approach is an attractive candidate for future low loss and highly integrated RF systems.",
"title": ""
},
{
"docid": "3b61571cef1afad7a1aad2f9f8e586c4",
"text": "A practical limitation of deep neural networks is their high degree of specialization to a single task and visual domain. Recently, inspired by the successes of transfer learning, several authors have proposed to learn instead universal feature extractors that, used as the first stage of any deep network, work well for several tasks and domains simultaneously. Nevertheless, such universal features are still somewhat inferior to specialized networks. To overcome this limitation, in this paper we propose to consider instead universal parametric families of neural networks, which still contain specialized problem-specific models, but differing only by a small number of parameters. We study different designs for such parametrizations, including series and parallel residual adapters, joint adapter compression, and parameter allocations, and empirically identify the ones that yield the highest compression. We show that, in order to maximize performance, it is necessary to adapt both shallow and deep layers of a deep network, but the required changes are very small. We also show that these universal parametrization are very effective for transfer learning, where they outperform traditional fine-tuning techniques.",
"title": ""
},
{
"docid": "a916ebc65d96a9f0bee8edbe6c360d38",
"text": "Technology must work for human race and improve the way help reaches a person in distress in the shortest possible time. In a developing nation like India, with the advancement in the transportation technology and rise in the total number of vehicles, road accidents are increasing at an alarming rate. If an accident occurs, the victim's survival rate increases when you give immediate medical assistance. You can give medical assistance to an accident victim only when you know the exact location of the accident. This paper presents an inexpensive but intelligent framework that can identify and report an accident for two-wheelers. This paper targets two-wheelers because the mortality ratio is highest in two-wheeler accidents in India. This framework includes a microcontroller-based low-cost Accident Detection Unit (ADU) that contains a GPS positioning system and a GSM modem to sense and generate accidental events to a centralized server. The ADU calculates acceleration along with ground clearance of the vehicle to identify the accidental situation. On detecting an accident, ADU sends accident detection parameters, GPS coordinates, and the current time to the Accident Detection Server (ADS). ADS maintain information on the movement of the vehicle according to the historical data, current data, and the rules that you configure in the system. If an accident occurs, ADS notifies the emergency services and the preconfigured mobile numbers for the vehicle that contains this unit.",
"title": ""
},
{
"docid": "966d8f17b750117effa81a64a0dba524",
"text": "As the number of indoor location based services increase in the mobile market, service providers demanding more accurate indoor positioning information. To improve the location estimation accuracy, this paper presents an enhanced client centered location prediction scheme and filtering algorithm based on IEEE 802.11 ac standard for indoor positioning system. We proposed error minimization filtering algorithm for accurate location prediction and also introduce IEEE 802.11 ac mobile client centered location correction scheme without modification of preinstalled infrastructure Access Point. Performance evaluation based on MATLAB simulation highlight the enhanced location prediction performance. We observe a significant error reduction of angle estimation.",
"title": ""
},
{
"docid": "2b010823e217e64e8e56b835cef40a1a",
"text": "Software defect prediction, which predicts defective code regions, can help developers find bugs and prioritize their testing efforts. To build accurate prediction models, previous studies focus on manually designing features that encode the characteristics of programs and exploring different machine learning algorithms. Existing traditional features often fail to capture the semantic differences of programs, and such a capability is needed for building accurate prediction models.\n To bridge the gap between programs' semantics and defect prediction features, this paper proposes to leverage a powerful representation-learning algorithm, deep learning, to learn semantic representation of programs automatically from source code. Specifically, we leverage Deep Belief Network (DBN) to automatically learn semantic features from token vectors extracted from programs' Abstract Syntax Trees (ASTs).\n Our evaluation on ten open source projects shows that our automatically learned semantic features significantly improve both within-project defect prediction (WPDP) and cross-project defect prediction (CPDP) compared to traditional features. Our semantic features improve WPDP on average by 14.7% in precision, 11.5% in recall, and 14.2% in F1. For CPDP, our semantic features based approach outperforms the state-of-the-art technique TCA+ with traditional features by 8.9% in F1.",
"title": ""
},
{
"docid": "eaf3d25c7babb067e987b2586129e0e4",
"text": "Iterative refinement reduces the roundoff errors in the computed solution to a system of linear equations. Only one step requires higher precision arithmetic. If sufficiently high precision is used, the final result is shown to be very accurate.",
"title": ""
},
{
"docid": "3d9c05889446c52af89e799a8fd48098",
"text": "OBJECTIVE\nCompared to other eating disorders, anorexia nervosa (AN) has the highest rates of completed suicide whereas suicide attempt rates are similar or lower than in bulimia nervosa (BN). Attempted suicide is a key predictor of suicide, thus this mismatch is intriguing. We sought to explore whether the clinical characteristics of suicidal acts differ between suicide attempters with AN, BN or without an eating disorders (ED).\n\n\nMETHOD\nCase-control study in a cohort of suicide attempters (n = 1563). Forty-four patients with AN and 71 with BN were compared with 235 non-ED attempters matched for sex, age and education, using interview measures of suicidal intent and severity.\n\n\nRESULTS\nAN patients were more likely to have made a serious attempt (OR = 3.4, 95% CI 1.4-7.9), with a higher expectation of dying (OR = 3.7,95% CI 1.1-13.5), and an increased risk of severity (OR = 3.4,95% CI 1.2-9.6). BN patients did not differ from the control group. Clinical markers of the severity of ED were associated with the seriousness of the attempt.\n\n\nCONCLUSION\nThere are distinct features of suicide attempts in AN. This may explain the higher suicide rates in AN. Higher completed suicide rates in AN may be partially explained by AN patients' higher desire to die and their more severe and lethal attempts.",
"title": ""
},
{
"docid": "e1d6ae23f76674750573d2b9ac34f5ec",
"text": "In this work, we provide a new formulation for Graph Convolutional Neural Networks (GCNNs) for link prediction on graph data that addresses common challenges for biomedical knowledge graphs (KGs). We introduce a regularized attention mechanism to GCNNs that not only improves performance on clean datasets, but also favorably accommodates noise in KGs, a pervasive issue in realworld applications. Further, we explore new visualization methods for interpretable modelling and to illustrate how the learned representation can be exploited to automate dataset denoising. The results are demonstrated on a synthetic dataset, the common benchmark dataset FB15k-237, and a large biomedical knowledge graph derived from a combination of noisy and clean data sources. Using these improvements, we visualize a learned model’s representation of the disease cystic fibrosis and demonstrate how to interrogate a neural network to show the potential of PPARG as a candidate therapeutic target for rheumatoid arthritis.",
"title": ""
},
{
"docid": "796ce0465e28fa07ebec1e4845073539",
"text": "In multiple attribute decision analysis (MADA), one often needs to deal with both numerical data and qualitative information with uncertainty. It is essential to properly represent and use uncertain information to conduct rational decision analysis. Based on a multilevel evaluation framework, an evidential reasoning (ER) approach has been developed for supporting such decision analysis, the kernel of which is an ER algorithm developed on the basis of the framework and the evidence combination rule of the Dempster–Shafer (D–S) theory. The approach has been applied to engineering design selection, organizational self-assessment, safety and risk assessment, and supplier assessment. In this paper, the fundamental features of the ER approach are investigated. New schemes for weight normalization and basic probability assignments are proposed. The original ER approach is further developed to enhance the process of aggregating attributes with uncertainty. Utility intervals are proposed to describe the impact of ignorance on decision analysis. Several properties of the new ER approach are explored, which lay the theoretical foundation of the ER approach. A numerical example of a motorcycle evaluation problem is examined using the ER approach. Computation steps and analysis results are provided in order to demonstrate its implementation process.",
"title": ""
},
{
"docid": "d04042c81f2c2f7f762025e6b2bd9ab8",
"text": "AIMS AND OBJECTIVES\nTo examine the association between trait emotional intelligence and learning strategies and their influence on academic performance among first-year accelerated nursing students.\n\n\nDESIGN\nThe study used a prospective survey design.\n\n\nMETHODS\nA sample size of 81 students (100% response rate) who undertook the accelerated nursing course at a large university in Sydney participated in the study. Emotional intelligence was measured using the adapted version of the 144-item Trait Emotional Intelligence Questionnaire. Four subscales of the Motivated Strategies for Learning Questionnaire were used to measure extrinsic goal motivation, peer learning, help seeking and critical thinking among the students. The grade point average score obtained at the end of six months was used to measure academic achievement.\n\n\nRESULTS\nThe results demonstrated a statistically significant correlation between emotional intelligence scores and critical thinking (r = 0.41; p < 0.001), help seeking (r = 0.33; p < 0.003) and peer learning (r = 0.32; p < 0.004) but not with extrinsic goal orientation (r = -0.05; p < 0.677). Emotional intelligence emerged as a significant predictor of academic achievement (β = 0.25; p = 0.023).\n\n\nCONCLUSION\nIn addition to their learning styles, higher levels of awareness and understanding of their own emotions have a positive impact on students' academic achievement. Higher emotional intelligence may lead students to pursue their interests more vigorously and think more expansively about subjects of interest, which could be an explanatory factor for higher academic performance in this group of nursing students.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nThe concepts of emotional intelligence are central to clinical practice as nurses need to know how to deal with their own emotions as well as provide emotional support to patients and their families. It is therefore essential that these skills are developed among student nurses to enhance the quality of their clinical practice.",
"title": ""
},
{
"docid": "aaa2c8a7367086cd762f52b6a6c30df6",
"text": "Many mature term-based or pattern-based approaches have been used in the field of information filtering to generate users' information needs from a collection of documents. A fundamental assumption for these approaches is that the documents in the collection are all about one topic. However, in reality users' interests can be diverse and the documents in the collection often involve multiple topics. Topic modelling, such as Latent Dirichlet Allocation (LDA), was proposed to generate statistical models to represent multiple topics in a collection of documents, and this has been widely utilized in the fields of machine learning and information retrieval, etc. But its effectiveness in information filtering has not been so well explored. Patterns are always thought to be more discriminative than single terms for describing documents. However, the enormous amount of discovered patterns hinder them from being effectively and efficiently used in real applications, therefore, selection of the most discriminative and representative patterns from the huge amount of discovered patterns becomes crucial. To deal with the above mentioned limitations and problems, in this paper, a novel information filtering model, Maximum matched Pattern-based Topic Model (MPBTM), is proposed. The main distinctive features of the proposed model include: (1) user information needs are generated in terms of multiple topics; (2) each topic is represented by patterns; (3) patterns are generated from topic models and are organized in terms of their statistical and taxonomic features; and (4) the most discriminative and representative patterns, called Maximum Matched Patterns, are proposed to estimate the document relevance to the user's information needs in order to filter out irrelevant documents. Extensive experiments are conducted to evaluate the effectiveness of the proposed model by using the TREC data collection Reuters Corpus Volume 1. The results show that the proposed model significantly outperforms both state-of-the-art term-based models and pattern-based models.",
"title": ""
},
{
"docid": "5459dc71fd40a576365f0afced64b6b7",
"text": "Cloud computing providers such as Amazon and Google have recently begun offering container-instances, which provide an efficient route to application deployment within a lightweight, isolated and well-defined execution environment. Cloud providers currently offer Container Service Platforms (CSPs), which orchestrate containerised applications. Existing CSP frameworks do not offer any form of intelligent resource scheduling: applications are usually scheduled individually, rather than taking a holistic view of all registered applications and available resources in the cloud. This can result in increased execution times for applications, resource wastage through underutilised container-instances, and a reduction in the number of applications that can be deployed, given the available resources. The research presented in this paper aims to extend existing systems by adding a cloud-based Container Management Service (CMS) framework that offers increased deployment density, scalability and resource efficiency. CMS provides additional functionalities for orchestrating containerised applications by joint optimisation of sets of containerised applications, and resource pool in multiple (geographical distributed) cloud regions. We evaluated CMS on a cloud-based CSP i.e., Amazon EC2 Container Management Service (ECS) and conducted extensive experiments using sets of CPU and Memory intensive containerised applications against the direct deployment strategy of Amazon ECS. The results show that CMS achieves up to 25% higher cluster utilisation, and up to 70% reduction in execution times.",
"title": ""
},
{
"docid": "4e8f7fdba06ae7973e3d25cf35399aaf",
"text": "Endometriosis is a benign and common disorder that is characterized by ectopic endometrium outside the uterus. Extrapelvic endometriosis, like of the vulva, is rarely seen. We report a case of a 47-year-old woman referred to our clinic due to complaints of a vulvar mass and periodic swelling of the mass at the time of menstruation. During surgery, the cyst ruptured and a chocolate-colored liquid escaped onto the surgical field. The cyst was extirpated totally. Hipstopathological examination showed findings compatible with endometriosis. She was asked to follow-up after three weeks. The patient had no complaints and the incision field was clear at the follow-up.",
"title": ""
},
{
"docid": "b5dc5268c2eb3b216aa499a639ddfbf9",
"text": "This paper describes a self-localization for indoor mobile robots based on integrating measurement values from multiple optical mouse sensors and a global camera. This paper consists of two parts. Firstly, we propose a dead-reckoning based on increments of the robot movements read directly from the floor using optical mouse sensors. Since the measurement values from multiple optical mouse sensors are compared to each other and only the reliable values are selected, accurate dead-reckoning can be realized compared with the conventional method based on increments of wheel rotations. Secondly, in order to realize robust localization, we propose a method of estimating position and orientation by integrating measured robot position (orientation information is not included) via global camera and dead-reckoning with the Kalman filter",
"title": ""
},
{
"docid": "23493c14053a4608203f8e77bd899445",
"text": "In this paper, lossless and near-lossless compression algorithms for multichannel electroencephalogram (EEG) signals are presented based on image and volumetric coding. Multichannel EEG signals have significant correlation among spatially adjacent channels; moreover, EEG signals are also correlated across time. Suitable representations are proposed to utilize those correlations effectively. In particular, multichannel EEG is represented either in the form of image (matrix) or volumetric data (tensor), next a wavelet transform is applied to those EEG representations. The compression algorithms are designed following the principle of “lossy plus residual coding,” consisting of a wavelet-based lossy coding layer followed by arithmetic coding on the residual. Such approach guarantees a specifiable maximum error between original and reconstructed signals. The compression algorithms are applied to three different EEG datasets, each with different sampling rate and resolution. The proposed multichannel compression algorithms achieve attractive compression ratios compared to algorithms that compress individual channels separately.",
"title": ""
},
{
"docid": "3712b06059d60211843a92a507075f86",
"text": "Automatically filtering relevant information about a real-world incident from Social Web streams and making the information accessible and findable in the given context of the incident are non-trivial scientific challenges. In this paper, we engineer and evaluate solutions that analyze the semantics of Social Web data streams to solve these challenges. We introduce Twitcident, a framework and Web-based system for filtering, searching and analyzing information about real-world incidents or crises. Given an incident, our framework automatically starts tracking and filtering information that is relevant for the incident from Social Web streams and Twitter particularly. It enriches the semantics of streamed messages to profile incidents and to continuously improve and adapt the information filtering to the current temporal context. Faceted search and analytical tools allow people and emergency services to retrieve particular information fragments and overview and analyze the current situation as reported on the Social Web.\n We put our Twitcident system into practice by connecting it to emergency broadcasting services in the Netherlands to allow for the retrieval of relevant information from Twitter streams for any incident that is reported by those services. We conduct large-scale experiments in which we evaluate (i) strategies for filtering relevant information for a given incident and (ii) search strategies for finding particular information pieces. Our results prove that the semantic enrichment offered by our framework leads to major and significant improvements of both the filtering and the search performance. A demonstration is available via: http://wis.ewi.tudelft.nl/twitcident/",
"title": ""
},
{
"docid": "9cb28706a45251e3d2fb5af64dd9351f",
"text": "This article proposes an informational perspective on comparison consequences in social judgment. It is argued that to understand the variable consequences of comparison, one has to examine what target knowledge is activated during the comparison process. These informational underpinnings are conceptualized in a selective accessibility model that distinguishes 2 fundamental comparison processes. Similarity testing selectively makes accessible knowledge indicating target-standard similarity, whereas dissimilarity testing selectively makes accessible knowledge indicating target-standard dissimilarity. These respective subsets of target knowledge build the basis for subsequent target evaluations, so that similarity testing typically leads to assimilation whereas dissimilarity testing typically leads to contrast. The model is proposed as a unifying conceptual framework that integrates diverse findings on comparison consequences in social judgment.",
"title": ""
}
] |
scidocsrr
|
ddd05d61d1f55eaa2126b71c913e8159
|
Trajectory similarity measures
|
[
{
"docid": "c1a8e30586aad77395e429556545675c",
"text": "We investigate techniques for analysis and retrieval of object trajectories in a two or three dimensional space. Such kind of data usually contain a great amount of noise, that makes all previously used metrics fail. Therefore, here we formalize non-metric similarity functions based on the Longest Common Subsequence (LCSS), which are very robust to noise and furthermore provide an intuitive notion of similarity between trajectories by giving more weight to the similar portions of the sequences. Stretching of sequences in time is allowed, as well as global translating of the sequences in space. Efficient approximate algorithms that compute these similarity measures are also provided. We compare these new methods to the widely used Euclidean and Time Warping distance functions (for real and synthetic data) and show the superiority of our approach, especially under the strong presence of noise. We prove a weaker version of the triangle inequality and employ it in an indexing structure to answer nearest neighbor queries. Finally, we present experimental results that validate the accuracy and efficiency of our approach.",
"title": ""
}
] |
[
{
"docid": "8f33b9a78fd252ef82008a4e6148f592",
"text": "Some natural language processing tasks can be learned from example corpora, but having enough examples for the task at hands can be a bottleneck. In this work we address how Wikipedia and DBpedia, two freely available language resources, can be used to support Named Entity Recognition, a fundamental task in Information Extraction and a necessary step of other tasks such as Co-reference Resolution and Relation Extraction.",
"title": ""
},
{
"docid": "e187403127990eb4b6c256ceb61d6f37",
"text": "Modern data analysis stands at the interface of statistics, computer science, and discrete mathematics. This volume describes new methods in this area, with special emphasis on classification and cluster analysis. Those methods are applied to problems in information retrieval, phylogeny, medical dia... This is the first book primarily dedicated to clustering using multiobjective genetic algorithms with extensive real-life applications in data mining and bioinformatics. The authors first offer detailed introductions to the relevant techniques-genetic algorithms, multiobjective optimization, soft ...",
"title": ""
},
{
"docid": "875917534e19961b06e09b6c1a872914",
"text": "It is found that the current manual water quality monitoring entails tedious process and is time consuming. To alleviate the problems caused by the manual monitoring and the lack of effective system for prawn farming, a remote water quality monitoring for prawn farming pond is proposed. The proposed system is leveraging on wireless sensors in detecting the water quality and Short Message Service (SMS) technology in delivering alert to the farmers upon detection of degradation of the water quality. Three water quality parameters that are critical to the prawn health are monitored, which are pH, temperature and dissolved oxygen. In this paper, the details of system design and implementation are presented. The results obtained in the preliminary survey study served as the basis for the development of the system prototype. Meanwhile, the results acquired through the usability testing imply that the system is able to meet the users’ needs. Key-Words: Remote monitoring, Water Quality, Wireless sensors",
"title": ""
},
{
"docid": "ea9b57d98a697f6b1f878b01434ccd6d",
"text": "Thin film bulk acoustic wave resonators (FBAR) using piezoelectric AlN thin films have attracted extensive research activities in the past few years. Highly c-axis oriented AlN thin films are particularly investigated for resonators operating at the fundamental thickness longitudinal mode. Depending on the processing conditions, tilted polarization (c-axis off the normal direction to the substrate surface) is often found for the as-deposited AlN thin films, which may leads to the coexistence of thickness longitudinal mode and shear mode for the thin film resonators. Knowing that the material properties are strongly crystalline orientation dependent for AlN thin films, a theoretical study is conducted to reveal the effect of tilted polarization on the frequency characteristics of thin film resonators. The input electric impedance of a thin film resonator is derived that includes both thickness longitudinal and thickness shear modes in a uniform equation. Based on the theoretical model, the effective material properties corresponding to the longitudinal and shear modes are calculated through the properties transformation between the original and new coordinate systems. The electric impedance spectra of dual mode AlN thin film resonators are calculated using appropriate materials properties and compared with experimental results. The results indicate that the frequency characteristics of thin film resonators vary with the tilted polarization angles. The coexistence of thickness longitudinal and shear modes in the thin film resonators may provide some flexibility in the design and fabrication of the FBAR devices.",
"title": ""
},
{
"docid": "d4345ee2baaa016fc38ba160e741b8ee",
"text": "Unstructured data, such as news and blogs, can provide valuable insights into the financial world. We present the NewsStream portal, an intuitive and easy-to-use tool for news analytics, which supports interactive querying and visualizations of the documents at different levels of detail. It relies on a scalable architecture for real-time processing of a continuous stream of textual data, which incorporates data acquisition, cleaning, natural-language preprocessing and semantic annotation components. It has been running for over two years and collected over 18 million news articles and blog posts. The NewsStream portal can be used to answer the questions when, how often, in what context, and with what sentiment was a financial entity or term mentioned in a continuous stream of news and blogs, and therefore providing a complement to news aggregators. We illustrate some features of our system in four use cases: relations between the rating agencies and the PIIGS countries, reflection of financial news on credit default swap (CDS) prices, the emergence of the Bitcoin digital currency, and visualizing how the world is connected through news.",
"title": ""
},
{
"docid": "074407b66a0a81f27625edc98dead4fc",
"text": "A new method, modified Bagging (mBagging) of Maximal Information Coefficient (mBoMIC), was developed for genome-wide identification. Traditional Bagging is inadequate to meet some requirements of genome-wide identification, in terms of statistical performance and time cost. To improve statistical performance and reduce time cost, an mBagging was developed to introduce Maximal Information Coefficient (MIC) into genomewide identification. The mBoMIC overcame the weakness of original MIC, i.e., the statistical power is inadequate and MIC values are volatile. The three incompatible measures of Bagging, i.e. time cost, statistical power and false positive rate, were significantly improved simultaneously. Compared with traditional Bagging, mBagging reduced time cost by 80%, improved statistical power by 15%, and decreased false positive rate by 31%. The mBoMIC has sensitivity and university in genome-wide identification. The SNPs identified only by mBoMIC have been reported as SNPs associated with cardiac disease.",
"title": ""
},
{
"docid": "3e13e2781fa5866533f86b5b773f9664",
"text": "Reconstruction of massive abdominal wall defects has long been a vexing clinical problem. A landmark development for the autogenous tissue reconstruction of these difficult wounds was the introduction of \"components of anatomic separation\" technique by Ramirez et al. This method uses bilateral, innervated, bipedicle, rectus abdominis-transversus abdominis-internal oblique muscle flap complexes transposed medially to reconstruct the central abdominal wall. Enamored with this concept, this institution sought to define the limitations and complications and to quantify functional outcome with the use of this technique. During a 4-year period (July of 1991 to 1995), 22 patients underwent reconstruction of massive midline abdominal wounds. The defects varied in size from 6 to 14 cm in width and from 10 to 24 cm in height. Causes included removal of infected synthetic mesh material (n = 7), recurrent hernia (n = 4), removal of split-thickness skin graft and dense abdominal wall cicatrix (n = 4), parastomal hernia (n = 2), primary incisional hernia (n = 2), trauma/enteric sepsis (n = 2), and tumor resection (abdominal wall desmoid tumor involving the right rectus abdominis muscle) (n = 1). Twenty patients were treated with mobilization of both rectus abdominis muscles, and in two patients one muscle complex was used. The plane of \"separation\" was the interface between the external and internal oblique muscles. A quantitative dynamic assessment of the abdominal wall was performed in two patients by using a Cybex TEF machine, with analysis of truncal flexion strength being undertaken preoperatively and at 6 months after surgery. Patients achieved wound healing in all cases with one operation. Minor complications included superficial infection in two patients and a wound seroma in one. One patient developed a recurrent incisional hernia 8 months postoperatively. There was one postoperative death caused by multisystem organ failure. One patient required the addition of synthetic mesh to achieve abdominal closure. This case involved a thin patient whose defect exceeded 16 cm in width. There has been no clinically apparent muscle weakness in the abdomen over that present preoperatively. Analysis of preoperative and postoperative truncal force generation revealed a 40 percent increase in strength in the two patients tested on a Cybex machine. Reoperation was possible through the reconstructed abdominal wall in two patients without untoward sequela. This operation is an effective method for autogenous reconstruction of massive midline abdominal wall defects. It can be used either as a primary mode of defect closure or to treat the complications of trauma, surgery, or various diseases.",
"title": ""
},
{
"docid": "12b94323c586de18e8de02e5a065903d",
"text": "Species of lactic acid bacteria (LAB) represent as potential microorganisms and have been widely applied in food fermentation worldwide. Milk fermentation process has been relied on the activity of LAB, where transformation of milk to good quality of fermented milk products made possible. The presence of LAB in milk fermentation can be either as spontaneous or inoculated starter cultures. Both of them are promising cultures to be explored in fermented milk manufacture. LAB have a role in milk fermentation to produce acid which is important as preservative agents and generating flavour of the products. They also produce exopolysaccharides which are essential as texture formation. Considering the existing reports on several health-promoting properties as well as their generally recognized as safe (GRAS) status of LAB, they can be widely used in the developing of new fermented milk products.",
"title": ""
},
{
"docid": "540f9a976fa3b6e8a928b7d0d50f64ac",
"text": "Due to the low-dimensional property of clean hyperspectral images (HSIs), many low-rank-based methods have been proposed to denoise HSIs. However, in an HSI, the noise intensity in different bands is often different, and most of the existing methods do not take this fact into consideration. In this paper, a noise-adjusted iterative low-rank matrix approximation (NAILRMA) method is proposed for HSI denoising. Based on the low-rank property of HSIs, the patchwise low-rank matrix approximation (LRMA) is established. To further separate the noise from the signal subspaces, an iterative regularization framework is proposed. Considering that the noise intensity in different bands is different, an adaptive iteration factor selection based on the noise variance of each HSI band is adopted. This noise-adjusted iteration strategy can effectively preserve the high-SNR bands and denoise the low-SNR bands. The randomized singular value decomposition (RSVD) method is then utilized to solve the NAILRMA optimization problem. A number of experiments were conducted in both simulated and real data conditions to illustrate the performance of the proposed NAILRMA method for HSI denoising.",
"title": ""
},
{
"docid": "413314afd18f6a73c626c4edd95e203b",
"text": "Data mining is the process of identifying the hidden patterns from large and complex data. It may provide crucial role in decision making for complex agricultural problems. Data visualisation is also equally important to understand the general trends of the effect of various factors influencing the crop yield. The present study examines the application of data visualisation techniques to find correlations between the climatic factors and rice crop yield. The study also applies data mining techniques to extract the knowledge from the historical agriculture data set to predict rice crop yield for Kharif season of Tropical Wet and Dry climatic zone of India. The data set has been visualised in Microsoft Office Excel using scatter plots. The classification algorithms have been executed in the free and open source data mining tool WEKA. The experimental results provided include sensitivity, specificity, accuracy, F1 score, Mathews correlation coefficient, mean absolute error, root mean squared error, relative absolute error and root relative squared error. General trends in the data visualisation show that decrease in precipitation in the selected climatic zone increases the rice crop yield and increase in minimum, average or maximum temperature for the season increases the rice crop yield. For the current data set experimental results show that J48 and LADTree achieved the highest accuracy, sensitivity and specificity. Classification performed by LWL classifier displayed the lowest accuracy, sensitivity and specificity results.",
"title": ""
},
{
"docid": "b8625942315177d2fa1c534e8be5eb9f",
"text": "The pantograph-overhead contact wire system is investigated by using an infrared camera. As the pantograph has a vertical motion because of the non-uniform elasticity of the catenary, in order to detect the temperature along the strip from a sequence of infrared images, a segment-tracking algorithm, based on the Hough transformation, has been employed. An analysis of the stored images could help maintenance operations revealing, for example, overheating of the pantograph strip, bursts of arcing, or an irregular positioning of the contact line. Obtained results are relevant for monitoring the status of the quality transmission of the current and for a predictive maintenance of the pantograph and of the catenary system. Examples of analysis from experimental data are reported in the paper.",
"title": ""
},
{
"docid": "8ef4e3c1b3b0008d263e0a55e92750cc",
"text": "We review recent breakthroughs in the silicon photonic technology and components, and describe progress in silicon photonic integrated circuits. Heterogeneous silicon photonics has recently demonstrated performance that significantly outperforms native III/V components. The impact active silicon photonic integrated circuits could have on interconnects, telecommunications, sensors, and silicon electronics is reviewed.",
"title": ""
},
{
"docid": "917ce5ca904c8866ddc84c113dc93b91",
"text": "Traditional media outlets are known to report political news in a biased way, potentially affecting the political beliefs of the audience and even altering their voting behaviors. Therefore, tracking bias in everyday news and building a platform where people can receive balanced news information is important. We propose a model that maps the news media sources along a dimensional dichotomous political spectrum using the co-subscriptions relationships inferred by Twitter links. By analyzing 7 million follow links, we show that the political dichotomy naturally arises on Twitter when we only consider direct media subscription. Furthermore, we demonstrate a real-time Twitter-based application that visualizes an ideological map of various media sources.",
"title": ""
},
{
"docid": "9bb86141611c54978033e2ea40f05b15",
"text": "In this work we investigate the problem of road scene semanti c segmentation using Deconvolutional Networks (DNs). Several c onstraints limit the practical performance of DNs in this context: firstly, the pa ucity of existing pixelwise labelled training data, and secondly, the memory const rai ts of embedded hardware, which rule out the practical use of state-of-theart DN architectures such as fully convolutional networks (FCN). To address the fi rst constraint, we introduce a Multi-Domain Road Scene Semantic Segmentation (M DRS3) dataset, aggregating data from six existing densely and sparsely lab elled datasets for training our models, and two existing, separate datasets for test ing their generalisation performance. We show that, while MDRS3 offers a greater volu me and variety of data, end-to-end training of a memory efficient DN does not yield satisfactory performance. We propose a new training strategy to over c me this, based on (i) the creation of a best-possible source network (S-Net ) from the aggregated data, ignoring time and memory constraints; and (ii) the tra nsfer of knowledge from S-Net to the memory-efficient target network (T-Net). W e evaluate different techniques for S-Net creation and T-Net transferral, and de monstrate that training a constrained deconvolutional network in this manner can un lock better performance than existing training approaches. Specifically, we s how that a target network can be trained to achieve improved accuracy versus an FC N despite using less than 1% of the memory. We believe that our approach can be useful beyond automotive scenarios where labelled data is similarly scar ce o fragmented and where practical constraints exist on the desired model size . We make available our network models and aggregated multi-domain dataset for reproducibility.",
"title": ""
},
{
"docid": "bea3ba14723c729c2c07b21f8569dce6",
"text": "In this paper, an action planning algorithm is presented for a reconfigurable hybrid leg–wheel mobile robot. Hybrid leg–wheel robots have recently receiving growing interest from the space community to explore planets, as they offer a solution to improve speed and mobility on uneven terrain. One critical issue connected with them is the study of an appropriate strategy to define when to use one over the other locomotion mode, depending on the soil properties and topology. Although this step is crucial to reach the full hybrid mechanism’s potential, little attention has been devoted to this topic. Given an elevation map of the environment, we developed an action planner that selects the appropriate locomotion mode along an optimal path toward a point of scientific interest. This tool is helpful for the space mission team to decide the next move of the robot during the exploration. First, a candidate path is generated based on topology and specifications’ criteria functions. Then, switching actions are defined along this path based on the robot’s performance in each motion mode. Finally, the path is rated based on the energy profile evaluated using a dynamic simulator. The proposed approach is applied to a concept prototype of a reconfigurable hybrid wheel–leg robot for planetary exploration through extensive simulations and real experiments. © Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2010",
"title": ""
},
{
"docid": "42ae9ed79bfa818870e67934c87d83c9",
"text": "It has been estimated that in urban scenarios up to 30% of the traffic is due to vehicles looking for a free parking space. Thanks to recent technological evolutions, it is now possible to have at least a partial coverage of real-time data of parking space availability, and some preliminary mobile services are able to guide drivers towards free parking spaces. Nevertheless, the integration of this data within car navigators is challenging, mainly because (I) current In-Vehicle Telematic systems are not connected, and (II) they have strong limitations in terms of storage capabilities. To overcome these issues, in this paper we present a back-end based approach to learn historical models of parking availability per street. These compact models can then be easily stored on the map in the vehicle. In particular, we investigate the trade-off between the granularity level of the detailed spatial and temporal representation of parking space availability vs. The achievable prediction accuracy, using different spatio-temporal clustering strategies. The proposed solution is evaluated using five months of parking availability data, publicly available from the project Spark, based in San Francisco. Results show that clustering can reduce the needed storage up to 99%, still having an accuracy of around 70% in the predictions.",
"title": ""
},
{
"docid": "ea71723e88685f5c8c29296d3d8e3197",
"text": "In this paper, we present a complete system for automatic face replacement in images. Our system uses a large library of face images created automatically by downloading images from the internet, extracting faces using face detection software, and aligning each extracted face to a common coordinate system. This library is constructed off-line, once, and can be efficiently accessed during face replacement. Our replacement algorithm has three main stages. First, given an input image, we detect all faces that are present, align them to the coordinate system used by our face library, and select candidate face images from our face library that are similar to the input face in appearance and pose. Second, we adjust the pose, lighting, and color of the candidate face images to match the appearance of those in the input image, and seamlessly blend in the results. Third, we rank the blended candidate replacements by computing a match distance over the overlap region. Our approach requires no 3D model, is fully automatic, and generates highly plausible results across a wide range of skin tones, lighting conditions, and viewpoints. We show how our approach can be used for a variety of applications including face de-identification and the creation of appealing group photographs from a set of images. We conclude with a user study that validates the high quality of our replacement results, and a discussion on the current limitations of our system.",
"title": ""
},
{
"docid": "398169d654c89191090c04fa930e5e62",
"text": "Psychedelic drug flashbacks have been a puzzling clinical phenomenon observed by clinicians. Flashbacks are defined as transient, spontaneous recurrences of the psychedelic drug effect appearing after a period of normalcy following an intoxication of psychedelics. The paper traces the evolution of the concept of flashback and gives examples of the varieties encountered. Although many drugs have been advocated for the treatment of flashback, flashbacks generally decrease in intensity and frequency with abstinence from psychedelic drugs.",
"title": ""
},
{
"docid": "0814d93829261505cb88d33a73adc4e7",
"text": "Partial shading of a photovoltaic array is the condition under which different modules in the array experience different irradiance levels due to shading. This difference causes mismatch between the modules, leading to undesirable effects such as reduction in generated power and hot spots. The severity of these effects can be considerably reduced by photovoltaic array reconfiguration. This paper proposes a novel mathematical formulation for the optimal reconfiguration of photovoltaic arrays to minimize partial shading losses. The paper formulates the reconfiguration problem as a mixed integer quadratic programming problem and finds the optimal solution using a branch and bound algorithm. The proposed formulation can be used for an equal or nonequal number of modules per row. Moreover, it can be used for fully reconfigurable or partially reconfigurable arrays. The improvement resulting from the reconfiguration with respect to the existing photovoltaic interconnections is demonstrated by extensive simulation results.",
"title": ""
}
] |
scidocsrr
|
59708a44c0315cb1c93bedc32e054a5f
|
Supporting Drivers in Keeping Safe Speed and Safe Distance: The SASPENCE Subproject Within the European Framework Programme 6 Integrating Project PReVENT
|
[
{
"docid": "c1956e4c6b732fa6a420d4c69cfbe529",
"text": "To improve the safety and comfort of a human-machine system, the machine needs to ‘know,’ in a real time manner, the human operator in the system. The machine’s assistance to the human can be fine tuned if the machine is able to sense the human’s state and intent. Related to this point, this paper discusses issues of human trust in automation, automation surprises, responsibility and authority. Examples are given of a driver assistance system for advanced automobile.",
"title": ""
},
{
"docid": "7cef2fac422d9fc3c3ffbc130831b522",
"text": "Development of advanced driver assistance systems with vehicle hardware-in-the-loop simulations , \" (Received 00 Month 200x; In final form 00 Month 200x) This paper presents a new method for the design and validation of advanced driver assistance systems (ADASs). With vehicle hardware-in-the-loop (VEHIL) simulations the development process, and more specifically the validation phase, of intelligent vehicles is carried out safer, cheaper, and more manageable. In the VEHIL laboratory a full-scale ADAS-equipped vehicle is set up in a hardware-in-the-loop simulation environment, where a chassis dynamometer is used to emulate the road interaction and robot vehicles to represent other traffic. In this controlled environment the performance and dependability of an ADAS is tested to great accuracy and reliability. The working principle and the added value of VEHIL are demonstrated with test results of an adaptive cruise control and a forward collision warning system. Based on the 'V' diagram, the position of VEHIL in the development process of ADASs is illustrated.",
"title": ""
}
] |
[
{
"docid": "9f3404d9e2941a1e0987f215e6d82a54",
"text": "Augmented reality technologies allow people to view and interact with virtual objects that appear alongside physical objects in the real world. For augmented reality applications to be effective, users must be able to accurately perceive the intended real world location of virtual objects. However, when creating augmented reality applications, developers are faced with a variety of design decisions that may affect user perceptions regarding the real world depth of virtual objects. In this paper, we conducted two experiments using a perceptual matching task to understand how shading, cast shadows, aerial perspective, texture, dimensionality (i.e., 2D vs. 3D shapes) and billboarding affected participant perceptions of virtual object depth relative to real world targets. The results of these studies quantify trade-offs across virtual object designs to inform the development of applications that take advantage of users' visual abilities to better blend the physical and virtual world.",
"title": ""
},
{
"docid": "9de44948e28892190f461199a1d33935",
"text": "As more and more data is provided in RDF format, storing huge amounts of RDF data and efficiently processing queries on such data is becoming increasingly important. The first part of the lecture will introduce state-of-the-art techniques for scalably storing and querying RDF with relational systems, including alternatives for storing RDF, efficient index structures, and query optimization techniques. As centralized RDF repositories have limitations in scalability and failure tolerance, decentralized architectures have been proposed. The second part of the lecture will highlight system architectures and strategies for distributed RDF processing. We cover search engines as well as federated query processing, highlight differences to classic federated database systems, and discuss efficient techniques for distributed query processing in general and for RDF data in particular. Moreover, for the last part of this chapter, we argue that extracting knowledge from the Web is an excellent showcase – and potentially one of the biggest challenges – for the scalable management of uncertain data we have seen so far. The third part of the lecture is thus intended to provide a close-up on current approaches and platforms to make reasoning (e.g., in the form of probabilistic inference) with uncertain RDF data scalable to billions of triples. 1 RDF in centralized relational databases The increasing availability and use of RDF-based information in the last decade has led to an increasing need for systems that can store RDF and, more importantly, efficiencly evaluate complex queries over large bodies of RDF data. The database community has developed a large number of systems to satisfy this need, partly reusing and adapting well-established techniques from relational databases [122]. The majority of these systems can be grouped into one of the following three classes: 1. Triple stores that store RDF triples in a single relational table, usually with additional indexes and statistics, 2. vertically partitioned tables that maintain one table for each property, and 3. Schema-specific solutions that store RDF in a number of property tables where several properties are jointly represented. In the following sections, we will describe each of these classes in detail, focusing on two important aspects of these systems: storage and indexing, i.e., how are RDF triples mapped to relational tables and which additional support structures are created; and query processing, i.e., how SPARQL queries are mapped to SQL, which additional operators are introduced, and how efficient execution plans for queries are determined. In addition to these purely relational solutions, a number of specialized RDF systems has been proposed that built on nonrelational technologies, we will briefly discuss some of these systems. Note that we will focus on SPARQL processing, which is not aware of underlying RDF/S or OWL schema and cannot exploit any information about subclasses; this is usually done in an additional layer on top. We will explain especially the different storage variants with the running example from Figure 1, some simple RDF facts from a university scenario. Here, each line corresponds to a fact (triple, statement), with a subject (usually a resource), a property (or predicate), and an object (which can be a resource or a constant). Even though resources are represented by URIs in RDF, we use string constants here for simplicity. A collection of RDF facts can also be represented as a graph. 
Here, resources (and constants) are nodes, and for each fact <s,p,o>, an edge from s to o is added with label p. Figure 2 shows the graph representation for the RDF example from Figure 1. <Katja,teaches,Databases> <Katja,works_for,MPI Informatics> <Katja,PhD_from,TU Ilmenau> <Martin,teaches,Databases> <Martin,works_for,MPI Informatics> <Martin,PhD_from,Saarland University> <Ralf,teaches,Information Retrieval> <Ralf,PhD_from,Saarland University> <Ralf,works_for,Saarland University> <Saarland University,located_in,Germany> <MPI Informatics,located_in,Germany> Fig. 1. Running example for RDF data",
"title": ""
},
{
"docid": "5288f4bbc2c9b8531042ce25b8df05b0",
"text": "Existing neural machine translation systems do not explicitly model what has been translated and what has not during the decoding phase. To address this problem, we propose a novel mechanism that separates the source information into two parts: translated Past contents and untranslated Future contents, which are modeled by two additional recurrent layers. The Past and Future contents are fed to both the attention model and the decoder states, which provides Neural Machine Translation (NMT) systems with the knowledge of translated and untranslated contents. Experimental results show that the proposed approach significantly improves the performance in Chinese-English, German-English, and English-German translation tasks. Specifically, the proposed model outperforms the conventional coverage model in terms of both the translation quality and the alignment error rate.",
"title": ""
},
{
"docid": "4dbbcaf264cc9beda8644fa926932d2e",
"text": "It is relatively stress-free to write about computer games as nothing too much has been said yet, and almost anything goes. The situation is pretty much the same when it comes to writing about games and gaming in general. The sad fact with alarming cumulative consequences is that they are undertheorized; there are Huizinga, Caillois and Ehrmann of course, and libraries full of board game studies,in addition to game theory and bits and pieces of philosophy—most notably those of Wittgenstein— but they won’t get us very far with computer games. So if there already is or soon will be a legitimate field for computer game studies, this field is also very open to intrusions and colonisations from the already organized scholarly tribes. Resisting and beating them is the goal of our first survival game in this paper, as what these emerging studies need is independence, or at least relative independence.",
"title": ""
},
{
"docid": "1510979ae461bfff3deb46dec2a81798",
"text": "State-of-the-art methods for relation classification are mostly based on statistical machine learning, and the performance heavily depends on the quality of the extracted features. The extracted features are often derived from the output of pre-existing NLP systems, which lead to the error propagation of existing tools and hinder the performance of the system. In this paper, we exploit a convolutional Deep Neural Network (DNN) to extract lexical and sentence level features. Our method takes all the word tokens as input without complicated pre-processing. First, all the word tokens are transformed to vectors by looking up word embeddings1. Then, lexical level features are extracted according to the given nouns. Meanwhile, sentence level features are learned using a convolutional approach. These two level features are concatenated as the final extracted feature vector. Finally, the features are feed into a softmax classifier to predict the relationship between two marked nouns. Experimental results show that our approach significantly outperforms the state-of-the-art methods.",
"title": ""
},
{
"docid": "1196ab65ddfcedb8775835f2e176576f",
"text": "Faster R-CNN achieves state-of-the-art performance on generic object detection. However, a simple application of this method to a large vehicle dataset performs unimpressively. In this paper, we take a closer look at this approach as it applies to vehicle detection. We conduct a wide range of experiments and provide a comprehensive analysis of the underlying structure of this model. We show that through suitable parameter tuning and algorithmic modification, we can significantly improve the performance of Faster R-CNN on vehicle detection and achieve competitive results on the KITTI vehicle dataset. We believe our studies are instructive for other researchers investigating the application of Faster R-CNN to their problems and datasets.",
"title": ""
},
{
"docid": "d9ad51299d4afb8075bd911b6655cf16",
"text": "To assess whether the passive leg raising test can help in predicting fluid responsiveness. Nonsystematic review of the literature. Passive leg raising has been used as an endogenous fluid challenge and tested for predicting the hemodynamic response to fluid in patients with acute circulatory failure. This is now easy to perform at the bedside using methods that allow a real time measurement of systolic blood flow. A passive leg raising induced increase in descending aortic blood flow of at least 10% or in echocardiographic subaortic flow of at least 12% has been shown to predict fluid responsiveness. Importantly, this prediction remains very valuable in patients with cardiac arrhythmias or spontaneous breathing activity. Passive leg raising allows reliable prediction of fluid responsiveness even in patients with spontaneous breathing activity or arrhythmias. This test may come to be used increasingly at the bedside since it is easy to perform and effective, provided that its effects are assessed by a real-time measurement of cardiac output.",
"title": ""
},
{
"docid": "8f9929a21107e6edf9a2aa5f69a3c012",
"text": "Rice is one of the most cultivated cereal in Asian countries and Vietnam in particular. Good seed germination is important for rice seed quality, that impacts the rice production and crop yield. Currently, seed germination evaluation is carried out manually by experienced persons. This is a tedious and time-consuming task. In this paper, we present a system for automatic evaluation of rice seed germination rate based on advanced techniques in computer vision and machine learning. We propose to use U-Net - a convolutional neural network - for segmentation and separation of rice seeds. Further processing such as computing distance transform and thresholding will be applied on the segmented regions for rice seed detection. Finally, ResNet is utilized to classify segmented rice seed regions into two classes: germinated and non- germinated seeds. Our contributions in this paper are three-fold. Firstly, we propose a framework which confirms that convolutional neural networks are better than traditional methods for both segmentation and classification tasks (with F1- scores of 93.38\\% and 95.66\\% respectively). Secondly, we deploy successfully the automatic tool in a real application for estimating rice germination rate. Finally, we introduce a new dataset of 1276 images of rice seeds from 7 to 8 seed varieties germinated during 6 to 10 days. This dataset is publicly available for research purpose.",
"title": ""
},
{
"docid": "4d2f03a786f8addf0825b5bc7701c621",
"text": "Integrated Design of Agile Missile Guidance and Autopilot Systems By P. K. Menon and E. J. Ohlmeyer Abstract Recent threat assessments by the US Navy have indicated the need for improving the accuracy of defensive missiles. This objective can only be achieved by enhancing the performance of the missile subsystems and by finding methods to exploit the synergism existing between subsystems. As a first step towards the development of integrated design methodologies, this paper develops a technique for integrated design of missile guidance and autopilot systems. Traditional approach for the design of guidance and autopilot systems has been to design these subsystems separately and then to integrate them together before verifying their performance. Such an approach does not exploit any beneficial relationships between these and other subsystems. The application of the feedback linearization technique for integrated guidance-autopilot system design is discussed. Numerical results using a six degree-of-freedom missile simulation are given. Integrated guidance-autopilot systems are expected to result in significant improvements in missile performance, leading to lower weight and enhanced lethality. Both of these factors will lead to a more effective, lower-cost weapon system. Integrated system design methods developed under the present research effort also have extensive applications in high performance aircraft autopilot and guidance systems.",
"title": ""
},
{
"docid": "f5886c4e73fed097e44d6a0e052b143f",
"text": "A polynomial filtered Davidson-type algorithm is proposed for symmetric eigenproblems, in which the correction-equation of the Davidson approach is replaced by a polynomial filtering step. The new approach has better global convergence and robustness properties when compared with standard Davidson-type methods. The typical filter used in this paper is based on Chebyshev polynomials. The goal of the polynomial filter is to amplify components of the desired eigenvectors in the subspace, which has the effect of reducing both the number of steps required for convergence and the cost in orthogonalizations and restarts. Numerical results are presented to show the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "115c06a2e366293850d1ef3d60f2a672",
"text": "Accurate network traffic identification plays important roles in many areas such as traffic engineering, QoS and intrusion detection etc. The emergence of many new encrypted applications which use dynamic port numbers and masquerading techniques causes the most challenging problem in network traffic identification field. One of the challenging issues for existing traffic identification methods is that they can’t classify online encrypted traffic. To overcome the drawback of the previous identification scheme and to meet the requirements of the encrypted network activities, our work mainly focuses on how to build an online Internet traffic identification based on flow information. We propose real-time encrypted traffic identification based on flow statistical characteristics using machine learning in this paper. We evaluate the effectiveness of our proposed method through the experiments on different real traffic traces. By experiment results and analysis, this method can classify online encrypted network traffic with high accuracy and robustness.",
"title": ""
},
{
"docid": "c6a7c67fa77d2a5341b8e01c04677058",
"text": "Human brain imaging studies have shown that greater amygdala activation to emotional relative to neutral events leads to enhanced episodic memory. Other studies have shown that fearful faces also elicit greater amygdala activation relative to neutral faces. To the extent that amygdala recruitment is sufficient to enhance recollection, these separate lines of evidence predict that recognition memory should be greater for fearful relative to neutral faces. Experiment 1 demonstrated enhanced memory for emotionally negative relative to neutral scenes; however, fearful faces were not subject to enhanced recognition across a variety of delays (15 min to 2 wk). Experiment 2 demonstrated that enhanced delayed recognition for emotional scenes was associated with increased sympathetic autonomic arousal, indexed by the galvanic skin response, relative to fearful faces. These results suggest that while amygdala activation may be necessary, it alone is insufficient to enhance episodic memory formation. It is proposed that a sufficient level of systemic arousal is required to alter memory consolidation resulting in enhanced recollection of emotional events.",
"title": ""
},
{
"docid": "fe4a59449e61d47ccf04f4ef70a6ba72",
"text": "Bipedal robots are currently either slow, energetically inefficient and/or require a lot of control to maintain their stability. This paper introduces the FastRunner, a bipedal robot based on a new leg architecture. Simulation results of a Planar FastRunner demonstrate that legged robots can run fast, be energy efficient and inherently stable. The simulated FastRunner has a cost of transport of 1.4 and requires only a local feedback of the hip position to reach 35.4 kph from stop in simulation.",
"title": ""
},
{
"docid": "c5cc4da2906670c30fc0bac3040217bd",
"text": "Many popular problems in robotics and computer vision including various types of simultaneous localization and mapping (SLAM) or bundle adjustment (BA) can be phrased as least squares optimization of an error function that can be represented by a graph. This paper describes the general structure of such problems and presents g2o, an open-source C++ framework for optimizing graph-based nonlinear error functions. Our system has been designed to be easily extensible to a wide range of problems and a new problem typically can be specified in a few lines of code. The current implementation provides solutions to several variants of SLAM and BA. We provide evaluations on a wide range of real-world and simulated datasets. The results demonstrate that while being general g2o offers a performance comparable to implementations of state-of-the-art approaches for the specific problems.",
"title": ""
},
{
"docid": "43d88fd321ad7a6c1bbb2a0054a77959",
"text": "We formulate the problem of 3D human pose estimation and tracking as one of inference in a graphical model. Unlike traditional kinematic tree representations, our model of the body is a collection of loosely-connected body-parts. In particular, we model the body using an undirected graphical model in which nodes correspond to parts and edges to kinematic, penetration, and temporal constraints imposed by the joints and the world. These constraints are encoded using pair-wise statistical distributions, that are learned from motion-capture training data. Human pose and motion estimation is formulated as inference in this graphical model and is solved using Particle Message Passing (PaMPas). PaMPas is a form of non-parametric belief propagation that uses a variation of particle filtering that can be applied over a general graphical model with loops. The loose-limbed model and decentralized graph structure allow us to incorporate information from “bottom-up” visual cues, such as limb and head detectors, into the inference process. These detectors enable automatic initialization and aid recovery from transient tracking failures. We illustrate the method by automatically tracking people in multi-view imagery using a set of calibrated cameras and present quantitative evaluation using the HumanEva dataset.",
"title": ""
},
{
"docid": "a6f1d81e6b4a20d892c9292fb86d2c1d",
"text": "Research in biomaterials and biomechanics has fueled a large part of the significant revolution associated with osseointegrated implants. Additional key areas that may become even more important--such as guided tissue regeneration, growth factors, and tissue engineering--could not be included in this review because of space limitations. All of this work will no doubt continue unabated; indeed, it is probably even accelerating as more clinical applications are found for implant technology and related therapies. An excellent overall summary of oral biology and dental implants recently appeared in a dedicated issue of Advances in Dental Research. Many advances have been made in the understanding of events at the interface between bone and implants and in developing methods for controlling these events. However, several important questions still remain. What is the relationship between tissue structure, matrix composition, and biomechanical properties of the interface? Do surface modifications alter the interfacial tissue structure and composition and the rate at which it forms? If surface modifications change the initial interface structure and composition, are these changes retained? Do surface modifications enhance biomechanical properties of the interface? As current understanding of the bone-implant interface progresses, so will development of proactive implants that can help promote desired outcomes. However, in the midst of the excitement born out of this activity, it is necessary to remember that the needs of the patient must remain paramount. It is also worth noting another as-yet unsatisfied need. With all of the new developments, continuing education of clinicians in the expert use of all of these research advances is needed. For example, in the area of biomechanical treatment planning, there are still no well-accepted biomaterials/biomechanics \"building codes\" that can be passed on to clinicians. Also, there are no readily available treatment-planning tools that clinicians can use to explore \"what-if\" scenarios and other design calculations of the sort done in modern engineering. No doubt such approaches could be developed based on materials already in the literature, but unfortunately much of what is done now by clinicians remains empirical. A worthwhile task for the future is to find ways to more effectively deliver products of research into the hands of clinicians.",
"title": ""
},
{
"docid": "dbd92d8f7fc050229379ebfb87e22a5f",
"text": "The presence of third-party tracking on websites has become customary. However, our understanding of the thirdparty ecosystem is still very rudimentary. We examine thirdparty trackers from a geographical perspective, observing the third-party tracking ecosystem from 29 countries across the globe. When examining the data by region (North America, South America, Europe, East Asia, Middle East, and Oceania), we observe significant geographical variation between regions and countries within regions. We find trackers that focus on specific regions and countries, and some that are hosted in countries outside their expected target tracking domain. Given the differences in regulatory regimes between jurisdictions, we believe this analysis sheds light on the geographical properties of this ecosystem and on the problems that these may pose to our ability to track and manage the different data silos that now store personal data about us all.",
"title": ""
},
{
"docid": "745cdbb442c73316f691dc20cc696f31",
"text": "Computer-generated texts, whether from Natural Language Generation (NLG) or Machine Translation (MT) systems, are often post-edited by humans before being released to users. The frequency and type of post-edits is a measure of how well the system works, and can be used for evaluation. We describe how we have used post-edit data to evaluate SUMTIME-MOUSAM, an NLG system that produces weather forecasts.",
"title": ""
},
{
"docid": "313c8ba6d61a160786760543658185df",
"text": "In this review, we collate information about ticks identified in different parts of the Sudan and South Sudan since 1956 in order to identify gaps in tick prevalence and create a map of tick distribution. This will avail basic data for further research on ticks and policies for the control of tick-borne diseases. In this review, we discuss the situation in the Republic of South Sudan as well as Sudan. For this purpose we have divided Sudan into four regions, namely northern Sudan (Northern and River Nile states), central Sudan (Khartoum, Gazera, White Nile, Blue Nile and Sennar states), western Sudan (North and South Kordofan and North, South and West Darfour states) and eastern Sudan (Red Sea, Kassala and Gadarif states).",
"title": ""
},
{
"docid": "c9968b5dbe66ad96605c88df9d92a2fb",
"text": "We present an analysis of the population dynamics and demographics of Amazon Mechanical Turk workers based on the results of the survey that we conducted over a period of 28 months, with more than 85K responses from 40K unique participants. The demographics survey is ongoing (as of November 2017), and the results are available at http://demographics.mturk-tracker.com: we provide an API for researchers to download the survey data. We use techniques from the field of ecology, in particular, the capture-recapture technique, to understand the size and dynamics of the underlying population. We also demonstrate how to model and account for the inherent selection biases in such surveys. Our results indicate that there are more than 100K workers available in Amazon»s crowdsourcing platform, the participation of the workers in the platform follows a heavy-tailed distribution, and at any given time there are more than 2K active workers. We also show that the half-life of a worker on the platform is around 12-18 months and that the rate of arrival of new workers balances the rate of departures, keeping the overall worker population relatively stable. Finally, we demonstrate how we can estimate the biases of different demographics to participate in the survey tasks, and show how to correct such biases. Our methodology is generic and can be applied to any platform where we are interested in understanding the dynamics and demographics of the underlying user population.",
"title": ""
}
] |
scidocsrr
|
8abdf33714c5ebfe22d1912ed9b9ccc7
|
From Spin to Swindle: Identifying Falsification in Financial Text
|
[
{
"docid": "83dec7aa3435effc3040dfb08cb5754a",
"text": "This paper examines the relationship between annual report readability and firm performance and earnings persistence. This is motivated by the Securities and Exchange Commission’s plain English disclosure regulations that attempt to make corporate disclosures easier to read for ordinary investors. I measure the readability of public company annual reports using both the Fog Index from computational linguistics and the length of the document. I find that the annual reports of firms with lower earnings are harder to read (i.e., they have higher Fog and are longer). Moreover, the positive earnings of firms with annual reports that are easier to read are more persistent. This suggests that managers may be opportunistically choosing the readability of annual reports to hide adverse information from investors.",
"title": ""
},
{
"docid": "3aaa2d625cddd46f1a7daddbb3e2b23d",
"text": "Text summarization is the task of shortening text documents but retaining their overall meaning and information. A good summary should highlight the main concepts of any text document. Many statistical-based, location-based and linguistic-based techniques are available for text summarization. This paper has described a novel hybrid technique for automatic summarization of Punjabi text. Punjabi is an official language of Punjab State in India. There are very few linguistic resources available for Punjabi. The proposed summarization system is hybrid of conceptual-, statistical-, location- and linguistic-based features for Punjabi text. In this system, four new location-based features and two new statistical features (entropy measure and Z score) are used and results are very much encouraging. Support vector machine-based classifier is also used to classify Punjabi sentences into summary and non-summary sentences and to handle imbalanced data. Synthetic minority over-sampling technique is applied for over-sampling minority class data. Results of proposed system are compared with different baseline systems, and it is found that F score, Precision, Recall and ROUGE-2 score of our system are reasonably well as compared to other baseline systems. Moreover, summary quality of proposed system is comparable to the gold summary.",
"title": ""
}
] |
[
{
"docid": "d99b2bab853f867024d1becb0835548d",
"text": "In this paper, we tackle challenges in migrating enterprise services into hybrid cloud-based deployments, where enterprise operations are partly hosted on-premise and partly in the cloud. Such hybrid architectures enable enterprises to benefit from cloud-based architectures, while honoring application performance requirements, and privacy restrictions on what services may be migrated to the cloud. We make several contributions. First, we highlight the complexity inherent in enterprise applications today in terms of their multi-tiered nature, large number of application components, and interdependencies. Second, we have developed a model to explore the benefits of a hybrid migration approach. Our model takes into account enterprise-specific constraints, cost savings, and increased transaction delays and wide-area communication costs that may result from the migration. Evaluations based on real enterprise applications and Azure-based cloud deployments show the benefits of a hybrid migration approach, and the importance of planning which components to migrate. Third, we shed insight on security policies associated with enterprise applications in data centers. We articulate the importance of ensuring assurable reconfiguration of security policies as enterprise applications are migrated to the cloud. We present algorithms to achieve this goal, and demonstrate their efficacy on realistic migration scenarios.",
"title": ""
},
{
"docid": "6e94a736aa913fa4585025ec52675f48",
"text": "A benefit of model-driven engineering relies on the automatic generation of artefacts from high-level models through intermediary levels using model transformations. In such a process, the input must be well designed, and the model transformations should be trustworthy. Because of the specificities of models and transformations, classical software test techniques have to be adapted. Among these techniques, mutation analysis has been ported, and a set of mutation operators has been defined. However, it currently requires considerable manual work and suffers from the test data set improvement activity. This activity is a difficult and time-consuming job and reduces the benefits of the mutation analysis. This paper addresses the test data set improvement activity. Model transformation traceability in conjunction with a model of mutation operators and a dedicated algorithm allow to automatically or semi-automatically produce improved test models. The approach is validated and illustrated in two case studies written in Kermeta. Copyright © 2014 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "19350a76398e0054be44c73618cdfb33",
"text": "An emerging class of data-intensive applications involve the geographically dispersed extraction of complex scientific information from very large collections of measured or computed data. Such applications arise, for example, in experimental physics, where the data in question is generated by accelerators, and in simulation science, where the data is generated by supercomputers. So-called Data Grids provide essential infrastructure for such applications, much as the Internet provides essential services for applications such as e-mail and the Web. We describe here two services that we believe are fundamental to any Data Grid: reliable, high-speed transport and replica management. Our high-speed transport service, GridFTP, extends the popular FTP protocol with new features required for Data Grid applications, such as striping and partial file access. Our replica management service integrates a replica catalog with GridFTP transfers to provide for the creation, registration, location, and management of dataset replicas. We present the design of both services and also preliminary performance results. Our implementations exploit security and other services provided by the Globus Toolkit.",
"title": ""
},
{
"docid": "6033682cf01008f027877e3fda4511f8",
"text": "The HER-2/neu oncogene is a member of the erbB-like oncogene family, and is related to, but distinct from, the epidermal growth factor receptor. This gene has been shown to be amplified in human breast cancer cell lines. In the current study, alterations of the gene in 189 primary human breast cancers were investigated. HER-2/neu was found to be amplified from 2- to greater than 20-fold in 30% of the tumors. Correlation of gene amplification with several disease parameters was evaluated. Amplification of the HER-2/neu gene was a significant predictor of both overall survival and time to relapse in patients with breast cancer. It retained its significance even when adjustments were made for other known prognostic factors. Moreover, HER-2/neu amplification had greater prognostic value than most currently used prognostic factors, including hormonal-receptor status, in lymph node-positive disease. These data indicate that this gene may play a role in the biologic behavior and/or pathogenesis of human breast cancer.",
"title": ""
},
{
"docid": "e6953902f5fc0bb9f98d9c632b2ac26e",
"text": "In high voltage (HV) flyback charging circuits, the importance of transformer parasitics holds a significant part in the overall system parasitics. The HV transformers have a larger number of turns on the secondary side that leads to higher self-capacitance which is inevitable. The conventional wire-wound transformer (CWT) has limitation over the design with larger self-capacitance including increased size and volume. For capacitive load in flyback charging circuit these self-capacitances on the secondary side gets added with device capacitances and dominates the load. For such applications the requirement is to have a transformer with minimum self-capacitances and low profile. In order to achieve the above requirements Planar Transformer (PT) design can be implemented with windings as tracks in Printed Circuit Boards (PCB) each layer is insulated by the FR4 material which aids better insulation. Finite Element Model (FEM) has been developed to obtain the self-capacitance in between the layers for larger turns on the secondary side. The modelled hardware prototype of the Planar Transformer has been characterised for open circuit and short circuit test using Frequency Response Analyser (FRA). The results obtained from FEM and FRA are compared and presented.",
"title": ""
},
{
"docid": "3db8dc56e573488c5085bf5a61ea0d7f",
"text": "This paper proposes new approximate coloring and other related techniques which markedly improve the run time of the branchand-bound algorithm MCR (J. Global Optim., 37, 95–111, 2007), previously shown to be the fastest maximum-clique-finding algorithm for a large number of graphs. The algorithm obtained by introducing these new techniques in MCR is named MCS. It is shown that MCS is successful in reducing the search space quite efficiently with low overhead. Consequently, it is shown by extensive computational experiments that MCS is remarkably faster than MCR and other existing algorithms. It is faster than the other algorithms by an order of magnitude for several graphs. In particular, it is faster than MCR for difficult graphs of very high density and for very large and sparse graphs, even though MCS is not designed for any particular type of graphs. MCS can be faster than MCR by a factor of more than 100,000 for some extremely dense random graphs.",
"title": ""
},
{
"docid": "2b202063a897e03f797918a8731fd88a",
"text": "Children quickly acquire basic grammatical facts about their native language. Does this early syntactic knowledge involve knowledge of words or rules? According to lexical accounts of acquisition, abstract syntactic and semantic categories are not primitive to the language-acquisition system; thus, early language comprehension and production are based on verb-specific knowledge. The present experiments challenge this account: We probed the abstractness of young children's knowledge of syntax by testing whether 25- and 21-month-olds extend their knowledge of English word order to new verbs. In four experiments, children used word order appropriately to interpret transitive sentences containing novel verbs. These findings demonstrate that although toddlers have much to learn about their native languages, they represent language experience in terms of an abstract mental vocabulary. These abstract representations allow children to rapidly detect general patterns in their native language, and thus to learn rules as well as words from the start.",
"title": ""
},
{
"docid": "c4c5401a3aac140a5c0ef254b08943b4",
"text": "Tasks recognizing named entities such as products, people names, or locations from documents have recently received significant attention in the literature. Many solutions to these tasks assume the existence of reference entity tables. An important challenge that needs to be addressed in the entity extraction task is that of ascertaining whether or not a candidate string approximately matches with a named entity in a given reference table.\n Prior approaches have relied on string-based similarity which only compare a candidate string and an entity it matches with. In this paper, we exploit web search engines in order to define new similarity functions. We then develop efficient techniques to facilitate approximate matching in the context of our proposed similarity functions. In an extensive experimental evaluation, we demonstrate the accuracy and efficiency of our techniques.",
"title": ""
},
{
"docid": "4bf485a218fca405a4d8655bc2a2be86",
"text": "In today’s competitive business environment, companies are facing challenges in dealing with big data issues for rapid decision making for improved productivity. Many manufacturing systems are not ready to manage big data due to the lack of smart analytics tools. Germany is leading a transformation toward 4th Generation Industrial Revolution (Industry 4.0) based on Cyber-Physical System based manufacturing and service innovation. As more software and embedded intelligence are integrated in industrial products and systems, predictive technologies can further intertwine intelligent algorithms with electronics and tether-free intelligence to predict product performance degradation and autonomously manage and optimize product service needs. This article addresses the trends of industrial transformation in big data environment as well as the readiness of smart predictive informatics tools to manage big data to achieve transparency and productivity. Keywords—Industry 4.0; Cyber Physical Systems; Prognostics and Health Management; Big Data;",
"title": ""
},
{
"docid": "63bc96e4a3f44ae1e7fc3e543363dcdd",
"text": "The involvement of smart grid in home and building automation systems has led to the development of diverse standards for interoperable products to control appliances, lighting, energy management and security. Smart grid enables a user to control the energy usage according to the price and demand. These standards have been developed in parallel by different organizations, which are either open or proprietary. It is necessary to arrange these standards in such a way that it is easier for potential readers to easily understand and select a suitable standard according to their functionalities without going into the depth of each standard. In this paper, we review the main smart grid standards proposed by different organizations for home and building automation in terms of different function fields. In addition, we evaluate the scope of interoperability, benefits and drawbacks of the standard.",
"title": ""
},
{
"docid": "98162dc86a2c70dd55e7e3e996dc492c",
"text": "PURPOSE\nTo evaluate gastroesophageal reflux disease (GERD) symptoms, patient satisfaction, and antisecretory drug use in a large group of GERD patients treated with the Stretta procedure (endoluminal temperature-controlled radiofrequency energy for the treatment of GERD) at multiple centers since February 1999.\n\n\nMETHODS\nAll subjects provided informed consent. A health care provider from each institution administered a standardized GERD survey to patients who had undergone Stretta. Subjects provided (at baseline and follow-up) (1) GERD severity (none, mild, moderate, severe), (2) percentage of GERD symptom control, (3) satisfaction, and (4) antisecretory medication use. Outcomes were compared with the McNemar test, paired t test, and Wilcoxon signed rank test.\n\n\nRESULTS\nSurveys of 558 patients were evaluated (33 institutions, mean follow-up of 8 months). Most patients (76%) were dissatisfied with baseline antisecretory therapy for GERD. After treatment, onset of GERD relief was less than 2 months (68.7%) or 2 to 6 months (14.6%). The median drug requirement improved from proton pump inhibitors twice daily to antacids as needed (P < .0001). The percentage of patients with satisfactory GERD control (absent or mild) improved from 26.3% at baseline (on drugs) to 77.0% after Stretta (P < .0001). Median baseline symptom control on drugs was 50%, compared with 90% at follow-up (P < .0001). Baseline patient satisfaction on drugs was 23.2%, compared with 86.5% at follow-up (P < .0001). Subgroup analysis (<1 year vs. >1 year of follow-up) showed a superior effect on symptom control and drug use in those patients beyond 1 year of follow-up, supporting procedure durability.\n\n\nCONCLUSIONS\nThe Stretta procedure results in significant GERD symptom control and patient satisfaction, superior to that derived from drug therapy in this study group. The treatment effect is durable beyond 1 year, and most patients were off all antisecretory drugs at follow-up. These results support the use of the Stretta procedure for patients with GERD, particularly those with inadequate control of symptoms on medical therapy.",
"title": ""
},
{
"docid": "bade30443df95aafc6958547afd5628a",
"text": "This paper details a series of pre-professional development interventions to assist teachers in utilizing computational thinking and programming as an instructional tool within other subject areas (i.e. music, language arts, mathematics, and science). It describes the lessons utilized in the interventions along with the instruments used to evaluate them, and offers some preliminary findings.",
"title": ""
},
{
"docid": "e306933b27867c99585d7fc82cc380ff",
"text": "We introduce a new OS abstraction—light-weight contexts (lwCs)—that provides independent units of protection, privilege, and execution state within a process. A process may include several lwCs, each with possibly different views of memory, file descriptors, and access capabilities. lwCs can be used to efficiently implement roll-back (process can return to a prior recorded state), isolated address spaces (lwCs within the process may have different views of memory, e.g., isolating sensitive data from network-facing components or isolating different user sessions), and privilege separation (in-process reference monitors can arbitrate and control access). lwCs can be implemented efficiently: the overhead of a lwC is proportional to the amount of memory exclusive to the lwC; switching lwCs is quicker than switching kernel threads within the same process. We describe the lwC abstraction and API, and an implementation of lwCs within the FreeBSD 11.0 kernel. Finally, we present an evaluation of common usage patterns, including fast rollback, session isolation, sensitive data isolation, and inprocess reference monitoring, using Apache, nginx, PHP, and OpenSSL.",
"title": ""
},
{
"docid": "a6889a6dd3dbdc4488ba01653acbc386",
"text": "OBJECTIVE\nTo succinctly summarise five contemporary theories about motivation to learn, articulate key intersections and distinctions among these theories, and identify important considerations for future research.\n\n\nRESULTS\nMotivation has been defined as the process whereby goal-directed activities are initiated and sustained. In expectancy-value theory, motivation is a function of the expectation of success and perceived value. Attribution theory focuses on the causal attributions learners create to explain the results of an activity, and classifies these in terms of their locus, stability and controllability. Social- cognitive theory emphasises self-efficacy as the primary driver of motivated action, and also identifies cues that influence future self-efficacy and support self-regulated learning. Goal orientation theory suggests that learners tend to engage in tasks with concerns about mastering the content (mastery goal, arising from a 'growth' mindset regarding intelligence and learning) or about doing better than others or avoiding failure (performance goals, arising from a 'fixed' mindset). Finally, self-determination theory proposes that optimal performance results from actions motivated by intrinsic interests or by extrinsic values that have become integrated and internalised. Satisfying basic psychosocial needs of autonomy, competence and relatedness promotes such motivation. Looking across all five theories, we note recurrent themes of competence, value, attributions, and interactions between individuals and the learning context.\n\n\nCONCLUSIONS\nTo avoid conceptual confusion, and perhaps more importantly to maximise the theory-building potential of their work, researchers must be careful (and precise) in how they define, operationalise and measure different motivational constructs. We suggest that motivation research continue to build theory and extend it to health professions domains, identify key outcomes and outcome measures, and test practical educational applications of the principles thus derived.",
"title": ""
},
{
"docid": "32ce76009016ba30ce38524b7e9071c9",
"text": "Error in medicine is a subject of continuing interest among physicians, patients, policymakers, and the general public. This article examines the issue of disclosure of medical errors in the context of emergency medicine. It reviews the concept of medical error; proposes the professional duty of truthfulness as a justification for error disclosure; examines barriers to error disclosure posed by health care systems, patients, physicians, and the law; suggests system changes to address the issue of medical error; offers practical guidelines to promote the practice of error disclosure; and discusses the issue of disclosure of errors made by another physician.",
"title": ""
},
{
"docid": "e158971a53492c6b9e21116da891de1a",
"text": "A central challenge to many fields of science and engineering involves minimizing non-convex error functions over continuous, high dimensional spaces. Gradient descent or quasi-Newton methods are almost ubiquitously used to perform such minimizations, and it is often thought that a main source of difficulty for the ability of these local methods to find the global minimum is the proliferation of local minima with much higher error than the global minimum. Here we argue, based on results from statistical physics, random matrix theory, and neural network theory, that a deeper and more profound difficulty originates from the proliferation of saddle points, not local minima, especially in high dimensional problems of practical interest. Such saddle points are surrounded by high error plateaus that can dramatically slow down learning, and give the illusory impression of the existence of a local minimum. Motivated by these arguments, we propose a new algorithm, the saddle-free Newton method, that can rapidly escape high dimensional saddle points, unlike gradient descent and quasi-Newton methods. We apply this algorithm to deep neural network training, and provide preliminary numerical evidence for its superior performance.",
"title": ""
},
{
"docid": "c2055f8366e983b45d8607c877126797",
"text": "This paper proposes and investigates an offline finite-element-method (FEM)-assisted position and speed observer for brushless dc permanent-magnet (PM) (BLDC-PM) motor drive sensorless control based on the line-to-line PM flux linkage estimation. The zero crossing of the line-to-line PM flux linkage occurs right in the middle of two commutation points (CPs) and is used as a basis for the position and speed observer. The position between CPs is obtained by comparing the estimated line-to-line PM flux with the FEM-calculated line-to-line PM flux. Even if the proposed observer relies on the fundamental model of the machine, a safe starting strategy under heavy load torque, called I-f control, is used, with seamless transition to the proposed sensorless control. The I-f starting method allows low-speed sensorless control, without knowing the initial position and without machine parameter identification. Digital simulations and experimental results are shown, demonstrating the reliability of the FEM-assisted position and speed observer for BLDC-PM motor sensorless control operation.",
"title": ""
},
{
"docid": "0ede49c216f911cd01b3bfcf0c539d6e",
"text": "Distribution patterns along a slope and vertical root distribution were compared among seven major woody species in a secondary forest of the warm-temperate zone in central Japan in relation to differences in soil moisture profiles through a growing season among different positions along the slope. Pinus densiflora, Juniperus rigida, Ilex pedunculosa and Lyonia ovalifolia, growing mostly on the upper part of the slope with shallow soil depth had shallower roots. Quercus serrata and Quercus glauca, occurring mostly on the lower slope with deep soil showed deeper rooting. Styrax japonica, mainly restricted to the foot slope, had shallower roots in spite of growing on the deepest soil. These relations can be explained by the soil moisture profile under drought at each position on the slope. On the upper part of the slope and the foot slope, deep rooting brings little advantage in water uptake from the soil due to the total drying of the soil and no period of drying even in the shallow soil, respectively. However, deep rooting is useful on the lower slope where only the deep soil layer keeps moist. This was supported by better diameter growth of a deep-rooting species on deeper soil sites than on shallower soil sites, although a shallow-rooting species showed little difference between them.",
"title": ""
},
{
"docid": "43c49bb7d9cebb8f476079ac9dd0af27",
"text": "Nowadays, most recommender systems (RSs) mainly aim to suggest appropriate items for individuals. Due to the social nature of human beings, group activities have become an integral part of our daily life, thus motivating the study on group RS (GRS). However, most existing methods used by GRS make recommendations through aggregating individual ratings or individual predictive results rather than considering the collective features that govern user choices made within a group. As a result, such methods are heavily sensitive to data, hence they often fail to learn group preferences when the data are slightly inconsistent with predefined aggregation assumptions. To this end, we devise a novel GRS approach which accommodates both individual choices and group decisions in a joint model. More specifically, we propose a deep-architecture model built with collective deep belief networks and dual-wing restricted Boltzmann machines. With such a deep model, we can use high-level features, which are induced from lower-level features, to represent group preference so as to relieve the vulnerability of data. Finally, the experiments conducted on a real-world dataset prove the superiority of our deep model over other state-of-the-art methods.",
"title": ""
}
] |
scidocsrr
|
36a0aa0c3466f4ff1b9199f09b78244d
|
Distributed Agile Development: Using Scrum in a Large Project
|
[
{
"docid": "3effd14a76c134d741e6a1f4fda022dc",
"text": "Agile processes rely on feedback and communication to work and they often work best with co-located teams for this reason. Sometimes agile makes sense because of project requirements and a distributed team makes sense because of resource constraints. A distributed team can be productive and fun to work on if the team takes an active role in overcoming the barriers that distribution causes. This is the story of how one product team used creativity, communications tools, and basic good engineering practices to build a successful product.",
"title": ""
}
] |
[
{
"docid": "6b7de13e2e413885e0142e3b6bf61dc9",
"text": "OBJECTIVE\nTo compare the healing at elevated sinus floors augmented either with deproteinized bovine bone mineral (DBBM) or autologous bone grafts and followed by immediate implant installation.\n\n\nMATERIAL AND METHODS\nTwelve albino New Zealand rabbits were used. Incisions were performed along the midline of the nasal dorsum. The nasal bone was exposed. A circular bony widow with a diameter of 3 mm was prepared bilaterally, and the sinus mucosa was detached. Autologous bone (AB) grafts were collected from the tibia. Similar amounts of AB or DBBM granules were placed below the sinus mucosa. An implant with a moderately rough surface was installed into the elevated sinus bilaterally. The animals were sacrificed after 7 (n = 6) or 40 days (n = 6).\n\n\nRESULTS\nThe dimensions of the elevated sinus space at the DBBM sites were maintained, while at the AB sites, a loss of 2/3 was observed between 7 and 40 days of healing. The implants showed similar degrees of osseointegration after 7 (7.1 ± 1.7%; 9.9 ± 4.5%) and 40 days (37.8 ± 15%; 36.0 ± 11.4%) at the DBBM and AB sites, respectively. Similar amounts of newly formed mineralized bone were found in the elevated space after 7 days at the DBBM (7.8 ± 6.6%) and AB (7.2 ± 6.0%) sites while, after 40 days, a higher percentage of bone was found at AB (56.7 ± 8.8%) compared to DBBM (40.3 ± 7.5%) sites.\n\n\nCONCLUSIONS\nBoth Bio-Oss® granules and autologous bone grafts contributed to the healing at implants installed immediately in elevated sinus sites in rabbits. Bio-Oss® maintained the dimensions, while autologous bone sites lost 2/3 of the volume between the two periods of observation.",
"title": ""
},
{
"docid": "4863316487ead3c1b459dc43d9ed17d5",
"text": "The advent of social networks poses severe threats on user privacy as adversaries can de-anonymize users' identities by mapping them to correlated cross-domain networks. Without ground-truth mapping, prior literature proposes various cost functions in hope of measuring the quality of mappings. However, there is generally a lacking of rationale behind the cost functions, whose minimizer also remains algorithmically unknown. We jointly tackle above concerns under a more practical social network model parameterized by overlapping communities, which, neglected by prior art, can serve as side information for de-anonymization. Regarding the unavailability of ground-truth mapping to adversaries, by virtue of the Minimum Mean Square Error (MMSE), our first contribution is a well-justified cost function minimizing the expected number of mismatched users over all possible true mappings. While proving the NP-hardness of minimizing MMSE, we validly transform it into the weighted-edge matching problem (WEMP), which, as disclosed theoretically, resolves the tension between optimality and complexity: (i) WEMP asymptotically returns a negligible mapping error in large network size under mild conditions facilitated by higher overlapping strength; (ii) WEMP can be algorithmically characterized via the convex-concave based de-anonymization algorithm (CBDA), finding the optimum of WEMP. Extensive experiments further confirm the effectiveness of CBDA under overlapping communities, in terms of averagely 90% re-identified users in the rare true cross-domain co-author networks when communities overlap densely, and roughly 70% enhanced reidentification ratio compared to non-overlapping cases.",
"title": ""
},
{
"docid": "d8b19c953cc66b6157b87da402dea98a",
"text": "In this paper we propose a new semi-supervised GAN architecture (ss-InfoGAN) for image synthesis that leverages information from few labels (as little as 0.22%, max. 10% of the dataset) to learn semantically meaningful and controllable data representations where latent variables correspond to label categories. The architecture builds on Information Maximizing Generative Adversarial Networks (InfoGAN) and is shown to learn both continuous and categorical codes and achieves higher quality of synthetic samples compared to fully unsupervised settings. Furthermore, we show that using small amounts of labeled data speeds-up training convergence. The architecture maintains the ability to disentangle latent variables for which no labels are available. Finally, we contribute an information-theoretic reasoning on how introducing semi-supervision increases mutual information between synthetic and real data.",
"title": ""
},
{
"docid": "f67b0131de800281ceb8522198302cde",
"text": "This brief presents an approach for identifying the parameters of linear time-varying systems that repeat their trajectories. The identification is based on the concept that parameter identification results can be improved by incorporating information learned from previous executions. The learning laws for this iterative learning identification are determined through an optimization framework. The convergence analysis of the algorithm is presented along with the experimental results to demonstrate its effectiveness. The algorithm is demonstrated to be capable of simultaneously estimating rapidly varying parameters and addressing robustness to noise by adopting a time-varying design approach.",
"title": ""
},
{
"docid": "98af4946349aabb95d98dde19344cd4f",
"text": "Automatic speaker recognition is a field of study attributed in identifying a person from a spoken phrase. The technique makes it possible to use the speaker’s voice to verify their identity and control access to the services such as biometric security system, voice dialing, telephone banking, telephone shopping, database access services, information services, voice mail, and security control for confidential information areas and remote access to the computers. This thesis represents a development of a Matlab based text dependent speaker recognition system. Mel Frequency Cepstrum Coefficient (MFCC) Method is used to extract a speaker’s discriminative features from the mathematical representation of the speech signal. After that Vector Quantization with VQ-LBG Algorithm is used to match the feature. Key-Words: Speaker Recognition, Human Speech Signal Processing, Vector Quantization",
"title": ""
},
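The speaker-recognition abstract above combines MFCC features with an LBG-trained vector-quantization codebook. The sketch below is one plausible reading of that pipeline, not the thesis implementation; librosa is assumed for MFCC extraction, and the codebook size, split factor, and the enrollment/test file names are illustrative.

```python
import numpy as np
import librosa  # assumed available for MFCC extraction

def mfcc_features(path):
    """Load a recording and return an (n_frames, 13) matrix of MFCCs."""
    y, sr = librosa.load(path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

def lbg_codebook(features, size=16, eps=0.01, iters=10):
    """Train a VQ codebook with LBG binary splitting plus Lloyd refinement."""
    codebook = features.mean(axis=0, keepdims=True)
    while len(codebook) < size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])  # split
        for _ in range(iters):
            dists = np.linalg.norm(features[:, None] - codebook[None], axis=2)
            nearest = dists.argmin(axis=1)
            for k in range(len(codebook)):
                members = features[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

def distortion(features, codebook):
    """Average distance of each MFCC frame to its nearest codeword."""
    dists = np.linalg.norm(features[:, None] - codebook[None], axis=2)
    return dists.min(axis=1).mean()

# Enrollment builds one codebook per speaker; identification picks the
# speaker whose codebook gives the lowest distortion on the test utterance.
# codebooks = {name: lbg_codebook(mfcc_features(f)) for name, f in enrollment.items()}
# claimed = min(codebooks, key=lambda n: distortion(mfcc_features("test.wav"), codebooks[n]))
```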
{
"docid": "b4edd546c786bbc7a72af67439dfcad7",
"text": "We aim to develop a computationally feasible, cognitivelyinspired, formal model of concept invention, drawing on Fauconnier and Turner’s theory of conceptual blending, and grounding it on a sound mathematical theory of concepts. Conceptual blending, although successfully applied to describing combinational creativity in a varied number of fields, has barely been used at all for implementing creative computational systems, mainly due to the lack of sufficiently precise mathematical characterisations thereof. The model we will define will be based on Goguen’s proposal of a Unified Concept Theory, and will draw from interdisciplinary research results from cognitive science, artificial intelligence, formal methods and computational creativity. To validate our model, we will implement a proof of concept of an autonomous computational creative system that will be evaluated in two testbed scenarios: mathematical reasoning and melodic harmonisation. We envisage that the results of this project will be significant for gaining a deeper scientific understanding of creativity, for fostering the synergy between understanding and enhancing human creativity, and for developing new technologies for autonomous creative systems.",
"title": ""
},
{
"docid": "126007e4cb2a200725868551a11267c8",
"text": "One of the most important challenges of this decade is the Internet of Things (IoT), which aims to enable things to be connected anytime, anyplace, with anything and anyone, ideally using any path/network and any service. IoT systems are usually composed of heterogeneous and interconnected lightweight devices that support applications that are subject to change in their external environment and in the functioning of these devices. The management of the variability of these changes, autonomously, is a challenge in the development of these systems. Agents are a good option for developing self-managed IoT systems due to their distributed nature, context-awareness and self-adaptation. Our goal is to enhance the development of IoT applications using agents and software product lines (SPL). Specifically, we propose to use Self-StarMASMAS, multi-agent system) agents and to define an SPL process using the Common Variability Language. In this contribution, we propose an SPL process for Self-StarMAS, paying particular attention to agents embedded in sensor motes.",
"title": ""
},
{
"docid": "d54e3e53b6aa5fad2949348ebe1523d2",
"text": "Optimizing the operation of cooperative multi-robot systems that can cooperatively act in large and complex environments has become an important focal area of research. This issue is motivated by many applications involving a set of cooperative robots that have to decide in a decentralized way how to execute a large set of tasks in partially observable and uncertain environments. Such decision problems are encountered while developing exploration rovers, team of patrolling robots, rescue-robot colonies, mine-clearance robots, etc. In this chapter, we introduce problematics related to the decentralized control of multi-robot systems. We first describe some applicative domains and review the main characteristics of the decision problems the robots must deal with. Then, we review some existing approaches to solve problems of multiagent decentralized control in stochastic environments. We present the Decentralized Markov Decision Processes and discuss their applicability to real-world multi-robot applications. Then, we introduce OC-DEC-MDPs and 2V-DEC-MDPs which have been developed to increase the applicability of DEC-MDPs.",
"title": ""
},
{
"docid": "0994f1ccf2f0157554c93d7814772200",
"text": "Botanical insecticides presently play only a minor role in insect pest management and crop protection; increasingly stringent regulatory requirements in many jurisdictions have prevented all but a handful of botanical products from reaching the marketplace in North America and Europe in the past 20 years. Nonetheless, the regulatory environment and public health needs are creating opportunities for the use of botanicals in industrialized countries in situations where human and animal health are foremost--for pest control in and around homes and gardens, in commercial kitchens and food storage facilities and on companion animals. Botanicals may also find favour in organic food production, both in the field and in controlled environments. In this review it is argued that the greatest benefits from botanicals might be achieved in developing countries, where human pesticide poisonings are most prevalent. Recent studies in Africa suggest that extracts of locally available plants can be effective as crop protectants, either used alone or in mixtures with conventional insecticides at reduced rates. These studies suggest that indigenous knowledge and traditional practice can make valuable contributions to domestic food production in countries where strict enforcement of pesticide regulations is impractical.",
"title": ""
},
{
"docid": "63cfadd9a71aaa1cbe1ead79f943f83c",
"text": "Persuasiveness is a high-level personality trait that quantifies the influence a speaker has on the beliefs, attitudes, intentions, motivations, and behavior of the audience. With social multimedia becoming an important channel in propagating ideas and opinions, analyzing persuasiveness is very important. In this work, we use the publicly available Persuasive Opinion Multimedia (POM) dataset to study persuasion. One of the challenges associated with this problem is the limited amount of annotated data. To tackle this challenge, we present a deep multimodal fusion architecture which is able to leverage complementary information from individual modalities for predicting persuasiveness. Our methods show significant improvement in performance over previous approaches.",
"title": ""
},
{
"docid": "a3e1eb38273f67a283063bce79b20b9d",
"text": "In this article, we examine the impact of digital screen devices, including television, on cognitive development. Although we know that young infants and toddlers are using touch screen devices, we know little about their comprehension of the content that they encounter on them. In contrast, research suggests that children begin to comprehend child-directed television starting at ∼2 years of age. The cognitive impact of these media depends on the age of the child, the kind of programming (educational programming versus programming produced for adults), the social context of viewing, as well the particular kind of interactive media (eg, computer games). For children <2 years old, television viewing has mostly negative associations, especially for language and executive function. For preschool-aged children, television viewing has been found to have both positive and negative outcomes, and a large body of research suggests that educational television has a positive impact on cognitive development. Beyond the preschool years, children mostly consume entertainment programming, and cognitive outcomes are not well explored in research. The use of computer games as well as educational computer programs can lead to gains in academically relevant content and other cognitive skills. This article concludes by identifying topics and goals for future research and provides recommendations based on current research-based knowledge.",
"title": ""
},
{
"docid": "c3c5168b77603b4bb548e1b8f7d9af50",
"text": "The concept of agile domain name system (DNS) refers to dynamic and rapidly changing mappings between domain names and their Internet protocol (IP) addresses. This empirical paper evaluates the bias from this kind of agility for DNS-based graph theoretical data mining applications. By building on two conventional metrics for observing malicious DNS agility, the agility bias is observed by comparing bipartite DNS graphs to different subgraphs from which vertices and edges are removed according to two criteria. According to an empirical experiment with two longitudinal DNS datasets, irrespective of the criterion, the agility bias is observed to be severe particularly regarding the effect of outlying domains hosted and delivered via content delivery networks and cloud computing services. With these observations, the paper contributes to the research domains of cyber security and DNS mining. In a larger context of applied graph mining, the paper further elaborates the practical concerns related to the learning of large and dynamic bipartite graphs.",
"title": ""
},
{
"docid": "228d7fa684e1caf43769fa13818b938f",
"text": "Optimal tuning of proportional-integral-derivative (PID) controller parameters is necessary for the satisfactory operation of automatic voltage regulator (AVR) system. This study presents a tuning fuzzy logic approach to determine the optimal PID controller parameters in AVR system. The developed fuzzy system can give the PID parameters on-line for different operating conditions. The suitability of the proposed approach for PID controller tuning has been demonstrated through computer simulations in an AVR system.",
"title": ""
},
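The AVR abstract above tunes PID gains on-line with a fuzzy system. As a hedged illustration of gain scheduling by fuzzy membership, a minimal sketch follows; the membership breakpoints, rule base, and gain ranges are invented placeholders, not values from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership value of x on the triangle (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_pid_gains(error):
    """Map the voltage error to (Kp, Ki, Kd) with a tiny two-rule fuzzy base."""
    mu_small = tri(abs(error), -0.5, 0.0, 0.5)   # error near the setpoint
    mu_large = tri(abs(error), 0.3, 1.0, 1.7)    # error far from the setpoint
    w = mu_large / (mu_small + mu_large + 1e-9)  # defuzzified weight in [0, 1]
    kp = 0.6 + w * (1.8 - 0.6)        # large error: stronger proportional action
    ki = 0.05 + (1 - w) * 0.35        # small error: stronger integral action
    kd = 0.10 + w * 0.20
    return kp, ki, kd

# Inside the AVR control loop (per sample, dt in seconds):
# kp, ki, kd = fuzzy_pid_gains(v_ref - v_measured)
# u = kp * e + ki * integral_e + kd * (e - e_prev) / dt
```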
{
"docid": "cf7fc0d2f9d2d78b1777254e37d6ceb7",
"text": "Nanoscale drug delivery systems using liposomes and nanoparticles are emerging technologies for the rational delivery of chemotherapeutic drugs in the treatment of cancer. Their use offers improved pharmacokinetic properties, controlled and sustained release of drugs and, more importantly, lower systemic toxicity. The commercial availability of liposomal Doxil and albumin-nanoparticle-based Abraxane has focused attention on this innovative and exciting field. Recent advances in liposome technology offer better treatment of multidrug-resistant cancers and lower cardiotoxicity. Nanoparticles offer increased precision in chemotherapeutic targeting of prostate cancer and new avenues for the treatment of breast cancer. Here we review current knowledge on the two technologies and their potential applications to cancer treatment.",
"title": ""
},
{
"docid": "577857c80b11e2e2451d00f9b1f37c85",
"text": "Real-world environments are typically dynamic, complex, and multisensory in nature and require the support of top–down attention and memory mechanisms for us to be able to drive a car, make a shopping list, or pour a cup of coffee. Fundamental principles of perception and functional brain organization have been established by research utilizing well-controlled but simplified paradigms with basic stimuli. The last 30 years ushered a revolution in computational power, brain mapping, and signal processing techniques. Drawing on those theoretical and methodological advances, over the years, research has departed more and more from traditional, rigorous, and well-understood paradigms to directly investigate cognitive functions and their underlying brain mechanisms in real-world environments. These investigations typically address the role of one or, more recently, multiple attributes of real-world environments. Fundamental assumptions about perception, attention, or brain functional organization have been challenged—by studies adapting the traditional paradigms to emulate, for example, the multisensory nature or varying relevance of stimulation or dynamically changing task demands. Here, we present the state of the field within the emerging heterogeneous domain of real-world neuroscience. To be precise, the aim of this Special Focus is to bring together a variety of the emerging “real-world neuroscientific” approaches. These approaches differ in their principal aims, assumptions, or even definitions of “real-world neuroscience” research. Here, we showcase the commonalities and distinctive features of the different “real-world neuroscience” approaches. To do so, four early-career researchers and the speakers of the Cognitive Neuroscience Society 2017 Meeting symposium under the same title answer questions pertaining to the added value of such approaches in bringing us closer to accurate models of functional brain organization and cognitive functions.",
"title": ""
},
{
"docid": "b1ef75c4a0dc481453fb68e94ec70cdc",
"text": "Autonomous Land Vehicles (ALVs), due to their considerable potential applications in areas such as mining and defence, are currently the focus of intense research at robotics institutes worldwide. Control systems that provide reliable navigation, often in complex or previously unknown environments, is a core requirement of any ALV implementation. Three key aspects for the provision of such autonomous systems are: 1) path planning, 2) obstacle avoidance, and 3) path following. The work presented in this thesis, under the general umbrella of the ACFR’s own ALV project, the ‘High Speed Vehicle Project’, addresses these three mobile robot competencies in the context of an ALV based system. As such, it develops both the theoretical concepts and the practical components to realise an initial, fully functional implementation of such a system. This system, which is implemented on the ACFR’s (ute) test vehicle, allows the user to enter a trajectory and follow it, while avoiding any detected obstacles along the path.",
"title": ""
},
{
"docid": "e44fefc20ed303064dabff3da1004749",
"text": "Printflatables is a design and fabrication system for human-scale, functional and dynamic inflatable objects. We use inextensible thermoplastic fabric as the raw material with the key principle of introducing folds and thermal sealing. Upon inflation, the sealed object takes the expected three dimensional shape. The workflow begins with the user specifying an intended 3D model which is decomposed to two dimensional fabrication geometry. This forms the input for a numerically controlled thermal contact iron that seals layers of thermoplastic fabric. In this paper, we discuss the system design in detail, the pneumatic primitives that this technique enables and merits of being able to make large, functional and dynamic pneumatic artifacts. We demonstrate the design output through multiple objects which could motivate fabrication of inflatable media and pressure-based interfaces.",
"title": ""
},
{
"docid": "78bf0b1d4065fd0e1740589c4e060c70",
"text": "This paper presents an efficient metric for quantifying the visual fidelity of natural images based on near-threshold and suprathreshold properties of human vision. The proposed metric, the visual signal-to-noise ratio (VSNR), operates via a two-stage approach. In the first stage, contrast thresholds for detection of distortions in the presence of natural images are computed via wavelet-based models of visual masking and visual summation in order to determine whether the distortions in the distorted image are visible. If the distortions are below the threshold of detection, the distorted image is deemed to be of perfect visual fidelity (VSNR = infin)and no further analysis is required. If the distortions are suprathreshold, a second stage is applied which operates based on the low-level visual property of perceived contrast, and the mid-level visual property of global precedence. These two properties are modeled as Euclidean distances in distortion-contrast space of a multiscale wavelet decomposition, and VSNR is computed based on a simple linear sum of these distances. The proposed VSNR metric is generally competitive with current metrics of visual fidelity; it is efficient both in terms of its low computational complexity and in terms of its low memory requirements; and it operates based on physical luminances and visual angle (rather than on digital pixel values and pixel-based dimensions) to accommodate different viewing conditions.",
"title": ""
},
{
"docid": "94d3d6c32b5c0f08722b2ecbe6692969",
"text": "A 64% instantaneous bandwidth scalable turnstile-based orthomode transducer to be used in the so-called extended C-band satellite link is presented. The proposed structure overcomes the current practical bandwidth limitations by adding a single-step widening at the junction of the four output rectangular waveguides. This judicious modification, together with the use of reduced-height waveguides and E-plane bends and power combiners, enables to approach the theoretical structure bandwidth limit with a simple, scalable and compact design. The presented orthomode transducer architecture exhibits a return loss better than 25 dB, an isolation between rectangular ports better than 50 dB and a transmission loss less than 0.04 dB in the 3.6-7 GHz range, which represents state-of-the-art achievement in terms of bandwidth.",
"title": ""
},
{
"docid": "5ddd9074ed765d0f2dc44a4fd9632fe5",
"text": "The complexity in electrical power systems is continuously increasing due to its advancing distribution. This affects the topology of the grid infrastructure as well as the IT-infrastructure, leading to various heterogeneous systems, data models, protocols, and interfaces. This in turn raises the need for appropriate processes and tools that facilitate the management of the systems architecture on different levels and from different stakeholders’ view points. In order to achieve this goal, a common understanding of architecture elements and means of classification shall be gained. The Smart Grid Architecture Model (SGAM) proposed in context of the European standardization mandate M/490 provides a promising basis for domainspecific architecture models. The idea of following a Model-Driven-Architecture (MDA)-approach to create such models, including requirements specification based on Smart Grid use cases, is detailed in this contribution. The SGAM-Toolbox is introduced as tool-support for the approach and its utility is demonstrated by two real-world case studies.",
"title": ""
}
] |
scidocsrr
|
07b5970a870965f31a550df1161ec716
|
An effective and fast iris recognition system based on a combined multiscale feature extraction technique
|
[
{
"docid": "b125649628d46871b2212c61e355ec43",
"text": "AbstructA method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence. The most unique phenotypic feature visible in a person’s face is the detailed texture of each eye’s iris: An estimate of its statistical complexity in a sample of the human population reveals variation corresponding to several hundred independent degrees-of-freedom. Morphogenetic randomness in the texture expressed phenotypically in the iris trabecular meshwork ensures that a test of statistical independence on two coded patterns originating from different eyes is passed almost certainly, whereas the same test is failed almost certainly when the compared codes originate from the same eye. The visible texture of a person’s iris in a real-time video image is encoded into a compact sequence of multi-scale quadrature 2-D Gabor wavelet coefficients, whose most-significant bits comprise a 256-byte “iris code.” Statistical decision theory generates identification decisions from ExclusiveOR comparisons of complete iris codes at the rate of 4000 per second, including calculation of decision confidence levels. The distributions observed empirically in such comparisons imply a theoretical “cross-over” error rate of one in 131000 when a decision criterion is adopted that would equalize the false accept and false reject error rates. In the typical recognition case, given the mean observed degree of iris code agreement, the decision confidence levels correspond formally to a conditional false accept probability of one in about lo”’.",
"title": ""
},
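Both iris-recognition abstracts above reduce matching to bit-level comparison of binary codes. A small sketch of the fractional Hamming distance behind such Exclusive-OR comparisons follows; the boolean-array representation, optional occlusion masks, and the 0.32 acceptance threshold are illustrative assumptions rather than values taken from either paper.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Fractional Hamming distance between two binary iris codes.

    Codes are boolean arrays (e.g. 2048 bits for a 256-byte code); optional
    masks exclude bits occluded by eyelids, lashes, or reflections.
    """
    valid = np.ones_like(code_a, dtype=bool)
    if mask_a is not None:
        valid &= mask_a
    if mask_b is not None:
        valid &= mask_b
    disagreements = np.logical_xor(code_a, code_b) & valid
    return disagreements.sum() / max(valid.sum(), 1)

# Decision rule in the spirit of the papers: accept the claim that two codes
# come from the same eye when the distance falls below a criterion chosen
# from the impostor distribution.
# same_eye = hamming_distance(code_probe, code_enrolled) < 0.32  # illustrative threshold
```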
{
"docid": "d82e41bcf0d25a728ddbad1dd875bd16",
"text": "With an increasing emphasis on security, automated personal identification based on biometrics has been receiving extensive attention over the past decade. Iris recognition, as an emerging biometric recognition approach, is becoming a very active topic in both research and practical applications. In general, a typical iris recognition system includes iris imaging, iris liveness detection, and recognition. This paper focuses on the last issue and describes a new scheme for iris recognition from an image sequence. We first assess the quality of each image in the input sequence and select a clear iris image from such a sequence for subsequent recognition. A bank of spatial filters, whose kernels are suitable for iris recognition, is then used to capture local characteristics of the iris so as to produce discriminating texture features. Experimental results show that the proposed method has an encouraging performance. In particular, a comparative study of existing methods for iris recognition is conducted on an iris image database including 2,255 sequences from 213 subjects. Conclusions based on such a comparison using a nonparametric statistical method (the bootstrap) provide useful information for further research.",
"title": ""
}
] |
[
{
"docid": "f01970e430e1dce15efdd5a054bd9c2a",
"text": "Hierarchical text categorization (HTC) refers to assigning a text document to one or more most suitable categories from a hierarchical category space. In this paper we present two HTC techniques based on kNN and SVM machine learning techniques for categorization process and byte n-gram based document representation. They are fully language independent and do not require any text preprocessing steps, or any prior information about document content or language. The effectiveness of the presented techniques and their language independence are demonstrated in experiments performed on five tree-structured benchmark category hierarchies that differ in many aspects: Reuters-Hier1, Reuters-Hier2, 15NGHier and 20NGHier in English and TanCorpHier in Chinese. The results obtained are compared with the corresponding flat categorization techniques applied to leaf level categories of the considered hierarchies. While kNN-based flat text categorization produced slightly better results than kNN-based HTC on the largest TanCorpHier and 20NGHier datasets, SVM-based HTC results do not considerably differ from the corresponding flat techniques, due to shallow hierarchies; still, they outperform both kNN-based flat and hierarchical categorization on all corpora except the smallest Reuters-Hier1 and Reuters-Hier2 datasets. Formal evaluation confirmed that the proposed techniques obtained state-of-the-art results.",
"title": ""
},
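The categorization abstract above represents documents by raw byte n-grams so that no tokenization or other language-specific preprocessing is needed. A minimal sketch of that representation, with a cosine similarity usable by a kNN-style categorizer, is given below; n = 3 and UTF-8 encoding are assumptions made for illustration, not the paper's settings.

```python
from collections import Counter

def byte_ngrams(text, n=3):
    """Byte n-gram profile of a document; works identically for English,
    Chinese, or any other text because it never tokenizes."""
    data = text.encode("utf-8")
    return Counter(data[i:i + n] for i in range(len(data) - n + 1))

def cosine(p, q):
    """Cosine similarity between two n-gram count profiles."""
    shared = set(p) & set(q)
    num = sum(p[g] * q[g] for g in shared)
    den = (sum(v * v for v in p.values()) ** 0.5) * (sum(v * v for v in q.values()) ** 0.5)
    return num / den if den else 0.0

# A flat kNN categorizer assigns a document to the majority category of its
# most similar training profiles; the hierarchical variant repeats the vote
# level by level down the category tree.
```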
{
"docid": "5178bd7051b15f9178f9d19e916fdc85",
"text": "With more and more functions in modern battery-powered mobile devices, enabling light-harvesting in the power management system can extend battery usage time [1]. For both indoor and outdoor operations of mobile devices, the output power range of the solar panel with the size of a touchscreen can vary from 100s of µW to a Watt due to the irradiance-level variation. An energy harvester is thus essential to achieve high maximum power-point tracking efficiency (ηT) over this wide power range. However, state-of-the-art energy harvesters only use one maximum power-point tracking (MPPT) method under different irradiance levels as shown in Fig. 22.5.1 [2–5]. Those energy harvesters with power-computation-based MPPT schemes for portable [2,3] and standalone [4] systems suffer from low ηT under low input power due to the limited input dynamic range of the MPPT circuitry. Other low-power energy harvesters with the fractional open-cell voltage (FOCV) MPPT scheme are confined by the fractional-constant accuracy to only offer high ηT across a narrow power range [5]. Additionally, the conventional FOCV MPPT scheme requires long transient time of 250ms to identify MPP [5], thereby significantly reducing energy capture from the solar panel. To address the above issues, this paper presents an energy harvester with an irradiance-aware hybrid algorithm (IAHA) to automatically switch between an auto-zeroed pulse-integration based MPPT (AZ PI-MPPT) and a slew-rate-enhanced FOCV (SRE-FOCV) MPPT scheme for maximizing ηT under different irradiance levels. The SRE-FOCV MPPT scheme also enables the energy harvester to shorten the MPPT transient time to 2.9ms in low irradiance levels.",
"title": ""
},
{
"docid": "fe400cd7cd0e63dfe4510e1357172d72",
"text": "Accurate segmentation of anatomical structures in chest radiographs is essential for many computer-aided diagnosis tasks. In this paper we investigate the latest fully-convolutional architectures for the task of multi-class segmentation of the lungs field, heart and clavicles in a chest radiograph. In addition, we explore the influence of using different loss functions in the training process of a neural network for semantic segmentation. We evaluate all models on a common benchmark of 247 X-ray images from the JSRT database and ground-truth segmentation masks from the SCR dataset. Our best performing architecture, is a modified U-Net that benefits from pre-trained encoder weights. This model outperformed the current state-of-the-art methods tested on the same benchmark, with Jaccard overlap scores of 96.1% for lung fields, 90.6% for heart and 85.5% for clavicles.",
"title": ""
},
{
"docid": "98ee8e95912382d6c2aaaa40c4117bbb",
"text": "There are various undergoing efforts by system operators to set up an electricity market at the distribution level to enable a rapid and widespread deployment of distributed energy resources (DERs) and microgrids. This paper follows the previous work of the authors in implementing the distribution market operator (DMO) concept, and focuses on investigating the clearing and settlement processes performed by the DMO. The DMO clears the market to assign the awarded power from the wholesale market to customers within its service territory based on their associated demand bids. The DMO accordingly settles the market to identify the distribution locational marginal prices (DLMPs) and calculate payments from each customer and the total payment to the system operator. Numerical simulations exhibit the merits and effectiveness of the proposed DMO clearing and settlement processes.",
"title": ""
},
{
"docid": "cebdedb344f2ba7efb95c2933470e738",
"text": "To address this shortcoming, we propose a method for training binary neural networks with a mixture of bits, yielding effectively fractional bitwidths. We demonstrate that our method is not only effective in allowing finer tuning of the speed to accuracy trade-off, but also has inherent representational advantages. Middle-Out Algorithm Heterogeneous Bitwidth Binarization in Convolutional Neural Networks",
"title": ""
},
{
"docid": "94d14d837104590fcf9055351fa59482",
"text": "The amygdala receives cortical inputs from the medial prefrontal cortex (mPFC) and orbitofrontal cortex (OFC) that are believed to affect emotional control and cue-outcome contingencies, respectively. Although mPFC impact on the amygdala has been studied, how the OFC modulates mPFC-amygdala information flow, specifically the infralimbic (IL) division of mPFC, is largely unknown. In this study, combined in vivo extracellular single-unit recordings and pharmacological manipulations were used in anesthetized rats to examine how OFC modulates amygdala neurons responsive to mPFC activation. Compared with basal condition, pharmacological (N-Methyl-D-aspartate) or electrical activation of the OFC exerted an inhibitory modulation of the mPFC-amygdala pathway, which was reversed with intra-amygdala blockade of GABAergic receptors with combined GABAA and GABAB antagonists (bicuculline and saclofen). Moreover, potentiation of the OFC-related pathways resulted in a loss of OFC control over the mPFC-amygdala pathway. These results show that the OFC potently inhibits mPFC drive of the amygdala in a GABA-dependent manner; but with extended OFC pathway activation this modulation is lost. Our results provide a circuit-level basis for this interaction at the level of the amygdala, which would be critical in understanding the normal and pathophysiological control of emotion and contingency associations regulating behavior.",
"title": ""
},
{
"docid": "be5d3179bc0198505728a5b27dcf1a89",
"text": "Emotion has been investigated from various perspectives and across several domains within human computer interaction (HCI) including intelligent tutoring systems, interactive web applications, social media and human-robot interaction. One of the most promising and, nevertheless, challenging applications of affective computing (AC) research is within computer games. This chapter focuses on the study of emotion in the computer games domain, reviews seminal work at the crossroads of game technology, game design and affective computing and details the key phases for efficient affectbased interaction in games.",
"title": ""
},
{
"docid": "a5e01cfeb798d091dd3f2af1a738885b",
"text": "It is shown by an extensive benchmark on molecular energy data that the mathematical form of the damping function in DFT-D methods has only a minor impact on the quality of the results. For 12 different functionals, a standard \"zero-damping\" formula and rational damping to finite values for small interatomic distances according to Becke and Johnson (BJ-damping) has been tested. The same (DFT-D3) scheme for the computation of the dispersion coefficients is used. The BJ-damping requires one fit parameter more for each functional (three instead of two) but has the advantage of avoiding repulsive interatomic forces at shorter distances. With BJ-damping better results for nonbonded distances and more clear effects of intramolecular dispersion in four representative molecular structures are found. For the noncovalently-bonded structures in the S22 set, both schemes lead to very similar intermolecular distances. For noncovalent interaction energies BJ-damping performs slightly better but both variants can be recommended in general. The exception to this is Hartree-Fock that can be recommended only in the BJ-variant and which is then close to the accuracy of corrected GGAs for non-covalent interactions. According to the thermodynamic benchmarks BJ-damping is more accurate especially for medium-range electron correlation problems and only small and practically insignificant double-counting effects are observed. It seems to provide a physically correct short-range behavior of correlation/dispersion even with unmodified standard functionals. In any case, the differences between the two methods are much smaller than the overall dispersion effect and often also smaller than the influence of the underlying density functional.",
"title": ""
},
{
"docid": "9b451aa93627d7b44acc7150a1b7c2d0",
"text": "BACKGROUND\nAerobic endurance exercise has been shown to improve higher cognitive functions such as executive control in healthy subjects. We tested the hypothesis that a 30-minute individually customized endurance exercise program has the potential to enhance executive functions in patients with major depressive disorder.\n\n\nMETHOD\nIn a randomized within-subject study design, 24 patients with DSM-IV major depressive disorder and 10 healthy control subjects performed 30 minutes of aerobic endurance exercise at 2 different workload levels of 40% and 60% of their predetermined individual 4-mmol/L lactic acid exercise capacity. They were then tested with 4 standardized computerized neuropsychological paradigms measuring executive control functions: the task switch paradigm, flanker task, Stroop task, and GoNogo task. Performance was measured by reaction time. Data were gathered between fall 2000 and spring 2002.\n\n\nRESULTS\nWhile there were no significant exercise-dependent alterations in reaction time in the control group, for depressive patients we observed a significant decrease in mean reaction time for the congruent Stroop task condition at the 60% energy level (p = .016), for the incongruent Stroop task condition at the 40% energy level (p = .02), and for the GoNogo task at both energy levels (40%, p = .025; 60%, p = .048). The exercise procedures had no significant effect on reaction time in the task switch paradigm or the flanker task.\n\n\nCONCLUSION\nA single 30-minute aerobic endurance exercise program performed by depressed patients has positive effects on executive control processes that appear to be specifically subserved by the anterior cingulate.",
"title": ""
},
{
"docid": "864d1c5a2861acc317f9f2a37c6d3660",
"text": "We report a case of an 8-month-old child with a primitive myxoid mesenchymal tumor of infancy arising in the thenar eminence. The lesion recurred after conservative excision and was ultimately nonresponsive to chemotherapy, necessitating partial amputation. The patient remains free of disease 5 years after this radical surgery. This is the 1st report of such a tumor since it was initially described by Alaggio and colleagues in 2006. The pathologic differential diagnosis is discussed.",
"title": ""
},
{
"docid": "0109c8c7663df5e8ac2abd805924d9f6",
"text": "To ensure system stability and availability during disturbances, industrial facilities equipped with on-site generation, generally utilize some type of load shedding scheme. In recent years, conventional underfrequency and PLC-based load shedding schemes have been integrated with computerized power management systems to provide an “automated” load shedding system. However, these automated solutions lack system operating knowledge and are still best-guess methods which typically result in excessive or insufficient load shedding. An intelligent load shedding system can provide faster and optimal load relief by utilizing actual operating conditions and knowledge of past system disturbances. This paper presents the need for an intelligent, automated load shedding system. Simulation of case studies for two industrial electrical networks are performed to demonstrate the advantages of an intelligent load shedding system over conventional load shedding methods from the design and operation perspectives. Index Terms — Load Shedding (LS), Intelligent Load Shedding (ILS), Power System Transient Stability, Frequency Relay, Programmable Logic Controller (PLC), Power Management System",
"title": ""
},
{
"docid": "4b3592efd8a4f6f6c9361a6f66a30a5f",
"text": "Error correction codes provides a mean to detect and correct errors introduced by the transmission channel. This paper presents a high-speed parallel cyclic redundancy check (CRC) implementation based on unfolding, pipelining, and retiming algorithms. CRC architectures are first pipelined to reduce the iteration bound by using novel look-ahead pipelining methods and then unfolded and retimed to design high-speed parallel circuits. The study and implementation using Verilog HDL. Modelsim Xilinx Edition (MXE) will be used for simulation and functional verification. Xilinx ISE will be used for synthesis and bit file generation. The Xilinx Chip scope will be used to test the results on Spartan 3E",
"title": ""
},
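The CRC abstract above derives parallel hardware architectures through unfolding, pipelining, and retiming; that hardware cannot be reproduced here, but the software sketch below illustrates the underlying idea of absorbing several input bits per step via a precomputed transition table. The standard reflected CRC-32 polynomial is assumed purely for the sake of a checkable example.

```python
def make_crc32_table(poly=0xEDB88320):
    """Precompute per-byte transitions so eight input bits are absorbed per
    step, the software counterpart of an unfolded parallel CRC circuit."""
    table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        table.append(crc)
    return table

TABLE = make_crc32_table()

def crc32(data: bytes, crc=0xFFFFFFFF):
    """Table-driven CRC-32: one table lookup per input byte."""
    for byte in data:
        crc = (crc >> 8) ^ TABLE[(crc ^ byte) & 0xFF]
    return crc ^ 0xFFFFFFFF

assert crc32(b"123456789") == 0xCBF43926  # standard CRC-32 check value
```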
{
"docid": "6bbfe0418e446d70eb0a17a14415c03c",
"text": "Most mobile apps today require access to remote services, and many of them also require users to be authenticated in order to use their services. To ensure the security between the client app and the remote service, app developers often use cryptographic mechanisms such as encryption (e.g., HTTPS), hashing (e.g., MD5, SHA1), and signing (e.g., HMAC) to ensure the confidentiality and integrity of the network messages. However, these cryptographic mechanisms can only protect the communication security, and server-side checks are still needed because malicious clients owned by attackers can generate any messages they wish. As a result, incorrect or missing server side checks can lead to severe security vulnerabilities including password brute-forcing, leaked password probing, and security access token hijacking. To demonstrate such a threat, we present AUTOFORGE, a tool that can automatically forge valid request messages from the client side to test whether the server side of an app has ensured the security of user accounts with sufficient checks. To enable these security tests, a fundamental challenge lies in how to forge a valid cryptographically consistent message such that it can be consumed by the server. We have addressed this challenge with a set of systematic techniques, and applied them to test the server side implementation of 76 popular mobile apps (each of which has over 1,000,000 installs). Our experimental results show that among these apps, 65 (86%) of their servers are vulnerable to password brute-forcing attacks, all (100%) are vulnerable to leaked password probing attacks, and 9 (12%) are vulnerable to Facebook access token hijacking attacks.",
"title": ""
},
{
"docid": "4d2bfda62140962af079817fc7dbd43e",
"text": "Online health communities and support groups are a valuable source of information for users suffering from a physical or mental illness. Users turn to these forums for moral support or advice on specific conditions, symptoms, or side effects of medications. This paper describes and studies the linguistic patterns of a community of support forum users over time focused on the used of anxious related words. We introduce a methodology to identify groups of individuals exhibiting linguistic patterns associated with anxiety and the correlations between this linguistic pattern and other word usage. We find some evidence that participation in these groups does yield positive effects on their users by reducing the frequency of anxious related word used over time.",
"title": ""
},
{
"docid": "ebc5ef2e1f557b723358043248c051a8",
"text": "Place similarity has a central role in geographic information retrieval and geographic information systems, where spatial proximity is frequently just a poor substitute for semantic relatedness. For applications such as toponym disambiguation, alternative measures are thus required to answer the non-trivial question of place similarity in a given context. In this paper, we discuss a novel approach to the construction of a network of locations from unstructured text data. By deriving similarity scores based on the textual distance of toponyms, we obtain a kind of relatedness that encodes the importance of the co-occurrences of place mentions. Based on the text of the English Wikipedia, we construct and provide such a network of place similarities, including entity linking to Wikidata as an augmentation of the contained information. In an analysis of centrality, we explore the networks capability of capturing the similarity between places. An evaluation of the network for the task of toponym disambiguation on the AIDA CoNLL-YAGO dataset reveals a performance that is in line with state-of-the-art methods.",
"title": ""
},
{
"docid": "63934b1fdc9b7c007302cdc41a42744e",
"text": "Recently, various energy harvesting techniques from ambient environments were proposed as alternative methods for powering sensor nodes, which convert the ambient energy from environments into electricity to power sensor nodes. However, those techniques are not applicable to the wireless sensor networks (WSNs) in the environment with no ambient energy source. To overcome this problem, an RF energy transfer method was proposed to power wireless sensor nodes. However, the RF energy transfer method also has a problem of unfairness among sensor nodes due to the significant difference between their energy harvesting rates according to their positions. In this paper, we propose a medium access control (MAC) protocol for WSNs based on RF energy transfer. The proposed MAC protocol adaptively manages the duty cycle of sensor nodes according to their the amount of harvested energy as well as the contention time of sensor nodes considering fairness among them. Through simulations, we show that our protocol can achieve a high degree of fairness, while maintaining duty cycle of sensor nodes appropriately according to the amount of their harvested energy.",
"title": ""
},
{
"docid": "90d9360a3e769311a8d7611d8c8845d9",
"text": "We introduce a learning-based approach to detect repeatable keypoints under drastic imaging changes of weather and lighting conditions to which state-of-the-art keypoint detectors are surprisingly sensitive. We first identify good keypoint candidates in multiple training images taken from the same viewpoint. We then train a regressor to predict a score map whose maxima are those points so that they can be found by simple non-maximum suppression. As there are no standard datasets to test the influence of these kinds of changes, we created our own, which we will make publicly available. We will show that our method significantly outperforms the state-of-the-art methods in such challenging conditions, while still achieving state-of-the-art performance on untrained standard datasets.",
"title": ""
},
{
"docid": "ba6f0206decf2b9bde415ffdfcd32eb9",
"text": "Broad host-range mini-Tn7 vectors facilitate integration of single-copy genes into bacterial chromosomes at a neutral, naturally evolved site. Here we present a protocol for employing the mini-Tn7 system in bacteria with single attTn7 sites, using the example Pseudomonas aeruginosa. The procedure involves, first, cloning of the genes of interest into an appropriate mini-Tn7 vector; second, co-transfer of the recombinant mini-Tn7 vector and a helper plasmid encoding the Tn7 site-specific transposition pathway into P. aeruginosa by either transformation or conjugation, followed by selection of insertion-containing strains; third, PCR verification of mini-Tn7 insertions; and last, optional Flp-mediated excision of the antibiotic-resistance selection marker present on the chromosomally integrated mini-Tn7 element. From start to verification of the insertion events, the procedure takes as little as 4 d and is very efficient, yielding several thousand transformants per microgram of input DNA or conjugation mixture. In contrast to existing chromosome integration systems, which are mostly based on species-specific phage or more-or-less randomly integrating transposons, the mini-Tn7 system is characterized by its ready adaptability to various bacterial hosts, its site specificity and its efficiency. Vectors have been developed for gene complementation, construction of gene fusions, regulated gene expression and reporter gene tagging.",
"title": ""
},
{
"docid": "ef24d571dc9a7a3fae2904b8674799e9",
"text": "With computers and the Internet being essential in everyday life, malware poses serious and evolving threats to their security, making the detection of malware of utmost concern. Accordingly, there have been many researches on intelligent malware detection by applying data mining and machine learning techniques. Though great results have been achieved with these methods, most of them are built on shallow learning architectures. Due to its superior ability in feature learning through multilayer deep architecture, deep learning is starting to be leveraged in industrial and academic research for different applications. In this paper, based on the Windows application programming interface calls extracted from the portable executable files, we study how a deep learning architecture can be designed for intelligent malware detection. We propose a heterogeneous deep learning framework composed of an AutoEncoder stacked up with multilayer restricted Boltzmann machines and a layer of associative memory to detect newly unknown malware. The proposed deep learning model performs as a greedy layer-wise training operation for unsupervised feature learning, followed by supervised parameter fine-tuning. Different from the existing works which only made use of the files with class labels (either malicious or benign) during the training phase, we utilize both labeled and unlabeled file samples to pre-train multiple layers in the heterogeneous deep learning framework from bottom to up for feature learning. A comprehensive experimental study on a real and large file collection from Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that our proposed deep learning framework can further improve the overall performance in malware detection compared with traditional shallow learning methods, deep learning methods with homogeneous framework, and other existing anti-malware scanners. The proposed heterogeneous deep learning framework can also be readily applied to other malware detection tasks.",
"title": ""
}
] |
scidocsrr
|
9521d868f4608cb693bb6aa146debb8e
|
Exploring Gameplay Experiences on the Oculus Rift
|
[
{
"docid": "119dd2c7eb5533ece82cff7987f21dba",
"text": "Despite the word's common usage by gamers and reviewers alike, it is still not clear what immersion means. This paper explores immersion further by investigating whether immersion can be defined quantitatively, describing three experiments in total. The first experiment investigated participants' abilities to switch from an immersive to a non-immersive task. The second experiment investigated whether there were changes in participants' eye movements during an immersive task. The third experiment investigated the effect of an externally imposed pace of interaction on immersion and affective measures (state-anxiety, positive affect, negative affect). Overall the findings suggest that immersion can be measured subjectively (through questionnaires) as well as objectively (task completion time, eye movements). Furthermore, immersion is not only viewed as a positive experience: negative emotions and uneasiness (i.e. anxiety) also run high.",
"title": ""
}
] |
[
{
"docid": "1d50c8598a41ed7953e569116f59ae41",
"text": "Several web-based platforms have emerged to ease the development of interactive or near real-time IoT applications by providing a way to connect things and services together and process the data they emit using a data flow paradigm. While these platforms have been found to be useful on their own, many IoT scenarios require the coordination of computing resources across the network: on servers, gateways and devices themselves. To address this, we explore how to extend existing IoT data flow platforms to create a system suitable for execution on a range of run time environments, toward supporting distributed IoT programs that can be partitioned between servers, gateways and devices. Eventually we aim to automate the distribution of data flows using appropriate distribution mechanism, and optimization heuristics based on participating resource capabilities and constraints imposed by the developer.",
"title": ""
},
{
"docid": "3efef315b9b7fcf914dd763080804baf",
"text": "Performing correct anti-predator behaviour is crucial for prey to survive. But are such abilities lost in species or populations living in predator-free environments? How individuals respond to the loss of predators has been shown to depend on factors such as the degree to which anti-predator behaviour relies on experience, the type of cues evoking the behaviour, the cost of expressing the behaviour and the number of generations under which the relaxed selection has taken place. Here we investigated whether captive-born populations of meerkats (Suricata suricatta) used the same repertoire of alarm calls previously documented in wild populations and whether captive animals, as wild ones, could recognize potential predators through olfactory cues. We found that all alarm calls that have been documented in the wild also occurred in captivity and were given in broadly similar contexts. Furthermore, without prior experience of odours from predators, captive meerkats seemed to dist inguish between faeces of potential predators (carnivores) and non-predators (herbivores). Despite slight structural differences, the alarm calls given in response to the faeces largely resembled those recorded in similar contexts in the wild. These results from captive populations suggest that direct, physical interaction with predators is not necessary for meerkats to perform correct anti-predator behaviour in terms of alarm-call usage and olfactory predator recognition. Such behaviour may have been retained in captivity because relatively little experience seems necessary for correct performance in the wild and/or because of the recency of relaxed selection on these populations. DOI: https://doi.org/10.1111/j.1439-0310.2007.01409.x Posted at the Zurich Open Repository and Archive, University of Zurich ZORA URL: https://doi.org/10.5167/uzh-282 Accepted Version Originally published at: Hollén, L I; Manser, M B (2007). Persistence of Alarm-Call Behaviour in the Absence of Predators: A Comparison Between Wild and Captive-Born Meerkats (Suricata suricatta). Ethology, 113(11):10381047. DOI: https://doi.org/10.1111/j.1439-0310.2007.01409.x",
"title": ""
},
{
"docid": "03cd4dcbda645c26fda3b8bce1bdb80a",
"text": "Nowadays, many organizations face the issue of information and communication technology (ICT) management. The organization undertakes various activities to assess the state of their ICT or the entire information system (IS), such as IS audits, reviews, IS due diligence, or they already have implemented systems to gather information regarding their IS. For the organizations’ boards and managers there is an issue how to evaluate current IS maturity level and based on assessments, define the IS strategy how to reach the maturity target level. The problem is, what kind of approach to use for rapid and effective IS maturity assessment. This paper summarizes the research regarding IS maturity within different organizations where the authors have delivered either complete IS due diligence or made partial analysis by IS Mirror method. The main objective of this research is to present and confirm the approach which could be used for effective IS maturity assessment and could be provided quickly and even remotely. The paper presented research question, related hypothesis, and approach for rapid IS maturity assessment, and results from several case studies made on-site or remotely.",
"title": ""
},
{
"docid": "ffb7b58d947aa15cd64efbadb0f9543d",
"text": "A multi-armed bandit is an experiment with the goal of accumulating rewards from a payoff distribution with unknown parameters that are to be learned sequentially. This article describes a heuristic for managing multi-armed bandits called randomized probability matching, which randomly allocates observations to arms according the Bayesian posterior probability that each arm is optimal. Advances in Bayesian computation have made randomized probability matching easy to apply to virtually any payoff distribution. This flexibility frees the experimenter to work with payoff distributions that correspond to certain classical experimental designs that have the potential to outperform methods that are ‘optimal’ in simpler contexts. I summarize the relationships between randomized probability matching and several related heuristics that have been used in the reinforcement learning literature. Copyright q 2010 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "bd121443b5a1dfb16687001c72b22199",
"text": "We review the nosological criteria and functional neuroanatomical basis for brain death, coma, vegetative state, minimally conscious state, and the locked-in state. Functional neuroimaging is providing new insights into cerebral activity in patients with severe brain damage. Measurements of cerebral metabolism and brain activations in response to sensory stimuli with PET, fMRI, and electrophysiological methods can provide information on the presence, degree, and location of any residual brain function. However, use of these techniques in people with severe brain damage is methodologically complex and needs careful quantitative analysis and interpretation. In addition, ethical frameworks to guide research in these patients must be further developed. At present, clinical examinations identify nosological distinctions needed for accurate diagnosis and prognosis. Neuroimaging techniques remain important tools for clinical research that will extend our understanding of the underlying mechanisms of these disorders.",
"title": ""
},
{
"docid": "b45bb513f7bd9de4941785490945d53e",
"text": "Recurrent Neural Networks (RNNs) have become the state-of-the-art choice for extracting patterns from temporal sequences. However, current RNN models are ill-suited to process irregularly sampled data triggered by events generated in continuous time by sensors or other neurons. Such data can occur, for example, when the input comes from novel event-driven artificial sensors that generate sparse, asynchronous streams of events or from multiple conventional sensors with different update intervals. In this work, we introduce the Phased LSTM model, which extends the LSTM unit by adding a new time gate. This gate is controlled by a parametrized oscillation with a frequency range that produces updates of the memory cell only during a small percentage of the cycle. Even with the sparse updates imposed by the oscillation, the Phased LSTM network achieves faster convergence than regular LSTMs on tasks which require learning of long sequences. The model naturally integrates inputs from sensors of arbitrary sampling rates, thereby opening new areas of investigation for processing asynchronous sensory events that carry timing information. It also greatly improves the performance of LSTMs in standard RNN applications, and does so with an order-of-magnitude fewer computes at runtime.",
"title": ""
},
{
"docid": "38d3dc6b5eb1dbf85b1a371b645a17da",
"text": "Energy costs for data centers continue to rise, already exceeding $15 billion yearly. Sadly much of this power is wasted. Servers are only busy 10--30% of the time on average, but they are often left on, while idle, utilizing 60% or more of peak power when in the idle state.\n We introduce a dynamic capacity management policy, AutoScale, that greatly reduces the number of servers needed in data centers driven by unpredictable, time-varying load, while meeting response time SLAs. AutoScale scales the data center capacity, adding or removing servers as needed. AutoScale has two key features: (i) it autonomically maintains just the right amount of spare capacity to handle bursts in the request rate; and (ii) it is robust not just to changes in the request rate of real-world traces, but also request size and server efficiency.\n We evaluate our dynamic capacity management approach via implementation on a 38-server multi-tier data center, serving a web site of the type seen in Facebook or Amazon, with a key-value store workload. We demonstrate that AutoScale vastly improves upon existing dynamic capacity management policies with respect to meeting SLAs and robustness.",
"title": ""
},
{
"docid": "984b9737cd2566ff7d18e6e2f9e5bed2",
"text": "Advances in anatomic understanding are frequently the basis upon which surgical techniques are advanced and refined. Recent anatomic studies of the superficial tissues of the face have led to an increased understanding of the compartmentalized nature of the subcutaneous fat. This report provides a review of the locations and characteristics of the facial fat compartments and provides examples of how this knowledge can be used clinically, specifically with regard to soft tissue fillers.",
"title": ""
},
{
"docid": "cb3ea08822b3e1648d497a3f472060a7",
"text": "In this paper, a linear time-varying model-based predictive controller (LTV-MPC) for lateral vehicle guidance by front steering is proposed. Due to the fact that this controller is designed in the scope of a Collision Avoidance System, it has to fulfill the requirement of an appropiate control performance at the limits of vehicle dynamics. To achieve this objective, the introduced approach employs estimations of the applied steering angle as well as the state variable trajectory to be used for successive linearization of the nonlinear prediction model over the prediction horizon. To evaluate the control performance, the proposed controller is compared to a LTV-MPC controller that uses linearizations of the nonlinear prediction model that remain unchanged over the prediction horizon. Simulation results show that an improved control performance can be achieved by the estimation based approach.",
"title": ""
},
{
"docid": "665f7b5de2ce617dd3009da3d208026f",
"text": "Throughout the history of the social evolution, man and animals come into frequent contact that forms an interdependent relationship between man and animals. The images of animals root in the everyday life of all nations, forming unique animal culture of each nation. Therefore, Chinese and English, as the two languages which spoken by the most people in the world, naturally contain a lot of words relating to animals, and because of different history and culture, the connotations of animal words in one language do not coincide with those in another. The clever use of animal words is by no means scarce in everyday communication or literary works, which helps make English and Chinese vivid and lively in image, plain and expressive in character, and rich and strong in flavor. In this study, many animal words are collected for the analysis of the similarities and the differences between the cultural connotations carried by animal words in Chinese and English, find out the causes of differences, and then discuss some methods and techniques for translating these animal words.",
"title": ""
},
{
"docid": "1a063741d53147eb6060a123bff96c27",
"text": "OBJECTIVE\nThe assessment of cognitive functions of adults with attention deficit hyperactivity disorder (ADHD) comprises self-ratings of cognitive functioning (subjective assessment) as well as psychometric testing (objective neuropsychological assessment). The aim of the present study was to explore the utility of these assessment strategies in predicting neuropsychological impairments of adults with ADHD as determined by both approaches.\n\n\nMETHOD\nFifty-five adults with ADHD and 66 healthy participants were assessed with regard to cognitive functioning in several domains by employing subjective and objective measurement tools. Significance and effect sizes for differences between groups as well as the proportion of patients with impairments were analyzed. Furthermore, logistic regression analyses were carried out in order to explore the validity of subjective and objective cognitive measures in predicting cognitive impairments.\n\n\nRESULTS\nBoth subjective and objective assessment tools revealed significant cognitive dysfunctions in adults with ADHD. The majority of patients displayed considerable impairments in all cognitive domains assessed. A comparison of effect sizes, however, showed larger dysfunctions in the subjective assessment than in the objective assessment. Furthermore, logistic regression models indicated that subjective cognitive complaints could not be predicted by objective measures of cognition and vice versa.\n\n\nCONCLUSIONS\nSubjective and objective assessment tools were found to be sensitive in revealing cognitive dysfunctions of adults with ADHD. Because of the weak association between subjective and objective measurements, it was concluded that subjective and objective measurements are both important for clinical practice but may provide distinct types of information and capture different aspects of functioning.",
"title": ""
},
{
"docid": "ce31bc3245c7f45311c65b4e8923cad0",
"text": "In this paper, we propose a probabilistic active tactile transfer learning (ATTL) method to enable robotic systems to exploit their prior tactile knowledge while discriminating among objects via their physical properties (surface texture, stiffness, and thermal conductivity). Using the proposed method, the robot autonomously selects and exploits its most relevant prior tactile knowledge to efficiently learn about new unknown objects with a few training samples or even one. The experimental results show that using our proposed method, the robot successfully discriminated among new objects with 72% discrimination accuracy using only one training sample (on-shot-tactile-learning). Furthermore, the results demonstrate that our method is robust against transferring irrelevant prior tactile knowledge (negative tactile knowledge transfer).",
"title": ""
},
{
"docid": "dbf3307ed73ec7c004b90e36f3f297c0",
"text": "Web 2.0 has had a tremendous impact on education. It facilitates access and availability of learning content in variety of new formats, content creation, learning tailored to students’ individual preferences, and collaboration. The range of Web 2.0 tools and features is constantly evolving, with focus on users and ways that enable users to socialize, share and work together on (user-generated) content. In this chapter we present ALEF – Adaptive Learning Framework that responds to the challenges posed on educational systems in Web 2.0 era. Besides its base functionality – to deliver educational content – ALEF particularly focuses on making the learning process more efficient by delivering tailored learning experience via personalized recommendation, and enabling learners to collaborate and actively participate in learning via interactive educational components. Our existing and successfully utilized solution serves as the medium for presenting key concepts that enable realizing Web 2.0 principles in education, namely lightweight models, and three components of framework infrastructure important for constant evolution and inclusion of students directly into the educational process – annotation framework, feedback infrastructure and widgets. These make possible to devise and implement various mechanisms for recommendation and collaboration – we also present selected methods for personalized recommendation and collaboration together with their evaluation in ALEF.",
"title": ""
},
{
"docid": "88be12fdd7ec90a7af7337f3d29b2130",
"text": "Classification of time series has been attracting great interest over the past decade. While dozens of techniques have been introduced, recent empirical evidence has strongly suggested that the simple nearest neighbor algorithm is very difficult to beat for most time series problems, especially for large-scale datasets. While this may be considered good news, given the simplicity of implementing the nearest neighbor algorithm, there are some negative consequences of this. First, the nearest neighbor algorithm requires storing and searching the entire dataset, resulting in a high time and space complexity that limits its applicability, especially on resource-limited sensors. Second, beyond mere classification accuracy, we often wish to gain some insight into the data and to make the classification result more explainable, which global characteristics of the nearest neighbor cannot provide. In this work we introduce a new time series primitive, time series shapelets, which addresses these limitations. Informally, shapelets are time series subsequences which are in some sense maximally representative of a class. We can use the distance to the shapelet, rather than the distance to the nearest neighbor to classify objects. As we shall show with extensive empirical evaluations in diverse domains, classification algorithms based on the time series shapelet primitives can be interpretable, more accurate, and significantly faster than state-of-the-art classifiers.",
"title": ""
},
{
"docid": "b640ed2bd02ba74ee0eb925ef6504372",
"text": "In the discussion about Future Internet, Software-Defined Networking (SDN), enabled by OpenFlow, is currently seen as one of the most promising paradigm. While the availability and scalability concerns rises as a single controller could be alleviated by using replicate or distributed controllers, there lacks a flexible mechanism to allow controller load balancing. This paper proposes BalanceFlow, a controller load balancing architecture for OpenFlow networks. By utilizing CONTROLLER X action extension for OpenFlow switches and cross-controller communication, one of the controllers, called “super controller”, can flexibly tune the flow-requests handled by each controller, without introducing unacceptable propagation latencies. Experiments based on real topology show that BalanceFlow can adjust the load of each controller dynamically.",
"title": ""
},
{
"docid": "ae5202304d6a485cbcc7cf3a0983fe0c",
"text": "Most of the algorithms for inverse reinforcement learning (IRL) assume that the reward function is a linear function of the pre-defined state and action features. However, it is often difficult to manually specify the set of features that can make the true reward function representable as a linear function. We propose a Bayesian nonparametric approach to identifying useful composite features for learning the reward function. The composite features are assumed to be the logical conjunctions of the predefined atomic features so that we can represent the reward function as a linear function of the composite features. We empirically show that our approach is able to learn composite features that capture important aspects of the reward function on synthetic domains, and predict taxi drivers’ behaviour with high accuracy on a real GPS trace dataset.",
"title": ""
},
{
"docid": "c7c74089c55965eb7734ab61ac0795a7",
"text": "Many researchers see the potential of wireless mobile learning devices to achieve large-scale impact on learning because of portability, low cost, and communications features. This enthusiasm is shared but the lessons drawn from three well-documented uses of connected handheld devices in education lead towards challenges ahead. First, ‘wireless, mobile learning’ is an imprecise description of what it takes to connect learners and their devices together in a productive manner. Research needs to arrive at a more precise understanding of the attributes of wireless networking that meet acclaimed pedagogical requirements and desires. Second, ‘pedagogical applications’ are often led down the wrong road by complex views of technology and simplistic views of social practices. Further research is needed that tells the story of rich pedagogical practice arising out of simple wireless and mobile technologies. Third, ‘large scale’ impact depends on the extent to which a common platform, that meets the requirements of pedagogically rich applications, becomes available. At the moment ‘wireless mobile technologies for education’ are incredibly diverse and incompatible; to achieve scale, a strong vision will be needed to lead to standardisation, overcoming the tendency to marketplace fragmentation.",
"title": ""
},
{
"docid": "77af12d87cd5827f35d92968d1888162",
"text": "Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.",
"title": ""
},
{
"docid": "f4d85ad52e37bd81058bfff830a52f0a",
"text": "A number of antioxidants and trace minerals have important roles in immune function and may affect health in transition dairy cows. Vitamin E and beta-carotene are important cellular antioxidants. Selenium (Se) is involved in the antioxidant system via its role in the enzyme glutathione peroxidase. Inadequate dietary vitamin E or Se decreases neutrophil function during the perpariturient period. Supplementation of vitamin E and/or Se has reduced the incidence of mastitis and retained placenta, and reduced duration of clinical symptoms of mastitis in some experiments. Research has indicated that beta-carotene supplementation may enhance immunity and reduce the incidence of retained placenta and metritis in dairy cows. Marginal copper deficiency resulted in reduced neutrophil killing and decreased interferon production by mononuclear cells. Copper supplementation of a diet marginal in copper reduced the peak clinical response during experimental Escherichia coli mastitis. Limited research indicated that chromium supplementation during the transition period may increase immunity and reduce the incidence of retained placenta.",
"title": ""
}
] |
scidocsrr
|
5b6c98a74df8ed0b2654ea84e478157a
|
Accelerating 5G QoE via public-private spectrum sharing
|
[
{
"docid": "7f4843fe3cab91688786d7ca1bfb19d9",
"text": "In this paper the possibility of designing an OFDM system for simultaneous radar and communications operations is discussed. A novel approach to OFDM radar processing is introduced that overcomes the typical drawbacks of correlation based processing. A suitable OFDM system parameterization for operation at 24 GHz is derived that fulfills the requirements for both applications. The operability of the proposed system concept is verified with MatLab simulations. Keywords-OFDM; Radar, Communications",
"title": ""
},
{
"docid": "e0583afbdc609792ad947223006c851f",
"text": "Orthogonal frequency-division multiplexing (OFDM) signal coding and system architecture were implemented to achieve radar and data communication functionalities. The resultant system is a software-defined unit, which can be used for range measurements, radar imaging, and data communications. Range reconstructions were performed for ranges up to 4 m using trihedral corner reflectors with approximately 203 m of radar cross section at the carrier frequency; range resolution of approximately 0.3 m was demonstrated. Synthetic aperture radar (SAR) image of a single corner reflector was obtained; SAR signal processing specific to OFDM signals is presented. Data communication tests were performed in radar setup, where the signal was reflected by the same target and decoded as communication data; bit error rate of was achieved at 57 Mb/s. The system shows good promise as a multifunctional software-defined sensor which can be used in radar sensor networks.",
"title": ""
}
] |
[
{
"docid": "46ba4ec0c448e01102d5adae0540ca0d",
"text": "An extensive literature shows that social relationships influence psychological well-being, but the underlying mechanisms remain unclear. We test predictions about online interactions and well-being made by theories of belongingness, relationship maintenance, relational investment, social support, and social comparison. An opt-in panel study of 1,910 Facebook users linked self-reported measures of well-being to counts of respondents’ Facebook activities from server logs. Specific uses of the site were associated with improvements in well-being: Receiving targeted, composed communication from strong ties was associated with improvements in wellbeing while viewing friends’ wide-audience broadcasts and receiving one-click feedback were not. These results suggest that people derive benefits from online communication, as long it comes from people they care about and has been tailored for them.",
"title": ""
},
{
"docid": "66e43ce62fd7e9cf78c4ff90b82afb8d",
"text": "BACKGROUND\nConcern over the frequency of unintended harm to patients has focused attention on the importance of teamwork and communication in avoiding errors. This has led to experiments with teamwork training programmes for clinical staff, mostly based on aviation models. These are widely assumed to be effective in improving patient safety, but the extent to which this assumption is justified by evidence remains unclear.\n\n\nMETHODS\nA systematic literature review on the effects of teamwork training for clinical staff was performed. Information was sought on outcomes including staff attitudes, teamwork skills, technical performance, efficiency and clinical outcomes.\n\n\nRESULTS\nOf 1036 relevant abstracts identified, 14 articles were analysed in detail: four randomized trials and ten non-randomized studies. Overall study quality was poor, with particular problems over blinding, subjective measures and Hawthorne effects. Few studies reported on every outcome category. Most reported improved staff attitudes, and six of eight reported significantly better teamwork after training. Five of eight studies reported improved technical performance, improved efficiency or reduced errors. Three studies reported evidence of clinical benefit, but this was modest or of borderline significance in each case. Studies with a stronger intervention were more likely to report benefits than those providing less training. None of the randomized trials found evidence of technical or clinical benefit.\n\n\nCONCLUSION\nThe evidence for technical or clinical benefit from teamwork training in medicine is weak. There is some evidence of benefit from studies with more intensive training programmes, but better quality research and cost-benefit analysis are needed.",
"title": ""
},
{
"docid": "efdbbbea608b711e00bc9e6b6468e821",
"text": "We reinterpret multiplicative noise in neural networks as auxiliary random variables that augment the approximate posterior in a variational setting for Bayesian neural networks. We show that through this interpretation it is both efficient and straightforward to improve the approximation by employing normalizing flows (Rezende & Mohamed, 2015) while still allowing for local reparametrizations (Kingma et al., 2015) and a tractable lower bound (Ranganath et al., 2015; Maaløe et al., 2016). In experiments we show that with this new approximation we can significantly improve upon classical mean field for Bayesian neural networks on both predictive accuracy as well as predictive uncertainty.",
"title": ""
},
{
"docid": "b9adc9e23eadd1b99ad13330a9cf8c9f",
"text": "BACKGROUND\nPancreatic stellate cells (PSCs), a major component of the tumor microenvironment in pancreatic cancer, play roles in cancer progression as well as drug resistance. Culturing various cells in microfluidic (microchannel) devices has proven to be a useful in studying cellular interactions and drug sensitivity. Here we present a microchannel plate-based co-culture model that integrates tumor spheroids with PSCs in a three-dimensional (3D) collagen matrix to mimic the tumor microenvironment in vivo by recapitulating epithelial-mesenchymal transition and chemoresistance.\n\n\nMETHODS\nA 7-channel microchannel plate was prepared using poly-dimethylsiloxane (PDMS) via soft lithography. PANC-1, a human pancreatic cancer cell line, and PSCs, each within a designated channel of the microchannel plate, were cultured embedded in type I collagen. Expression of EMT-related markers and factors was analyzed using immunofluorescent staining or Proteome analysis. Changes in viability following exposure to gemcitabine and paclitaxel were measured using Live/Dead assay.\n\n\nRESULTS\nPANC-1 cells formed 3D tumor spheroids within 5 days and the number of spheroids increased when co-cultured with PSCs. Culture conditions were optimized for PANC-1 cells and PSCs, and their appropriate interaction was confirmed by reciprocal activation shown as increased cell motility. PSCs under co-culture showed an increased expression of α-SMA. Expression of EMT-related markers, such as vimentin and TGF-β, was higher in co-cultured PANC-1 spheroids compared to that in mono-cultured spheroids; as was the expression of many other EMT-related factors including TIMP1 and IL-8. Following gemcitabine exposure, no significant changes in survival were observed. When paclitaxel was combined with gemcitabine, a growth inhibitory advantage was prominent in tumor spheroids, which was accompanied by significant cytotoxicity in PSCs.\n\n\nCONCLUSIONS\nWe demonstrated that cancer cells grown as tumor spheroids in a 3D collagen matrix and PSCs co-cultured in sub-millimeter proximity participate in mutual interactions that induce EMT and drug resistance in a microchannel plate. Microfluidic co-culture of pancreatic tumor spheroids with PSCs may serve as a useful model for studying EMT and drug resistance in a clinically relevant manner.",
"title": ""
},
{
"docid": "9f2d6c872761d8922cac8a3f30b4b7ba",
"text": "Recently, CNN reported on the future of brain-computer interfaces (BCIs). BCIs are devices that process a user's brain signals to allow direct communication and interaction with the environment. BCIs bypass the normal neuromuscular output pathways and rely on digital signal processing and machine learning to translate brain signals to action (Figure 1). Historically, BCIs were developed with biomedical applications in mind, such as restoring communication in completely paralyzed individuals and replacing lost motor function. More recent applications have targeted nondisabled individuals by exploring the use of BCIs as a novel input device for entertainment and gaming. The task of the BCI is to identify and predict behaviorally induced changes or \"cognitive states\" in a user's brain signals. Brain signals are recorded either noninvasively from electrodes placed on the scalp [electroencephalogram (EEG)] or invasively from electrodes placed on the surface of or inside the brain. BCIs based on these recording techniques have allowed healthy and disabled individuals to control a variety of devices. In this article, we will describe different challenges and proposed solutions for noninvasive brain-computer interfacing.",
"title": ""
},
{
"docid": "e624a94c8440c1a8f318f5a56c353632",
"text": "“Neural coding” is a popular metaphor in neuroscience, where objective properties of the world are communicated to the brain in the form of spikes. Here I argue that this metaphor is often inappropriate and misleading. First, when neurons are said to encode experimental parameters, the implied communication channel consists of both the experimental and biological system. Thus, the terms “neural code” are used inappropriately when “neuroexperimental code” would be more accurate, although less insightful. Second, the brain cannot be presumed to decode neural messages into objective properties of the world, since it never gets to observe those properties. To avoid dualism, codes must relate not to external properties but to internal sensorimotor models. Because this requires structured representations, neural assemblies cannot be the basis of such codes. Third, a message is informative to the extent that the reader understands its language. But the neural code is private to the encoder since only the message is communicated: each neuron speaks its own language. It follows that in the neural coding metaphor, the brain is a Tower of Babel. Finally, the relation between input signals and actions is circular; that inputs do not preexist to outputs makes the coding paradigm problematic. I conclude that the view that spikes are messages is generally not tenable. An alternative proposition is that action potentials are actions on other neurons and the environment, and neurons interact with each other rather than exchange messages. . CC-BY 4.0 International license not peer-reviewed) is the author/funder. It is made available under a The copyright holder for this preprint (which was . http://dx.doi.org/10.1101/168237 doi: bioRxiv preprint first posted online Jul. 27, 2017;",
"title": ""
},
{
"docid": "1d15eb4e1a4296829ea9af62923a3fae",
"text": "We present a novel adaptation technique for search engines to better support information-seeking activities that include both lookup and exploratory tasks. Building on previous findings, we describe (1) a classifier that recognizes task type (lookup vs. exploratory) as a user is searching and (2) a reinforcement learning based search engine that adapts accordingly the balance of exploration/exploitation in ranking the documents. This allows supporting both task types surreptitiously without changing the familiar list-based interface. Search results include more diverse results when users are exploring and more precise results for lookup tasks. Users found more useful results in exploratory tasks when compared to a base-line system, which is specifically tuned for lookup tasks.",
"title": ""
},
{
"docid": "33e5a3619a6f7d831146c399ff55f5ff",
"text": "With the continuous development of online learning platforms, educational data analytics and prediction have become a promising research field, which are helpful for the development of personalized learning system. However, the indicator's selection process does not combine with the whole learning process, which may affect the accuracy of prediction results. In this paper, we induce 19 behavior indicators in the online learning platform, proposing a student performance prediction model which combines with the whole learning process. The model consists of four parts: data collection and pre-processing, learning behavior analytics, algorithm model building and prediction. Moreover, we apply an optimized Logistic Regression algorithm, taking a case to analyze students' behavior and to predict their performance. Experimental results demonstrate that these eigenvalues can effectively predict whether a student was probably to have an excellent grade.",
"title": ""
},
{
"docid": "a5776d4da32a93c69b18c696c717e634",
"text": "Optical flow computation is a key component in many computer vision systems designed for tasks such as action detection or activity recognition. However, despite several major advances over the last decade, handling large displacement in optical flow remains an open problem. Inspired by the large displacement optical flow of Brox and Malik, our approach, termed Deep Flow, blends a matching algorithm with a variational approach for optical flow. We propose a descriptor matching algorithm, tailored to the optical flow problem, that allows to boost performance on fast motions. The matching algorithm builds upon a multi-stage architecture with 6 layers, interleaving convolutions and max-pooling, a construction akin to deep convolutional nets. Using dense sampling, it allows to efficiently retrieve quasi-dense correspondences, and enjoys a built-in smoothing effect on descriptors matches, a valuable asset for integration into an energy minimization framework for optical flow estimation. Deep Flow efficiently handles large displacements occurring in realistic videos, and shows competitive performance on optical flow benchmarks. Furthermore, it sets a new state-of-the-art on the MPI-Sintel dataset.",
"title": ""
},
{
"docid": "f5deca0eaba55d34af78df82eda27bc2",
"text": "Sparse reward is one of the most challenging problems in reinforcement learning (RL). Hindsight Experience Replay (HER) attempts to address this issue by converting a failure experience to a successful one by relabeling the goals. Despite its effectiveness, HER has limited applicability because it lacks a compact and universal goal representation. We present Augmenting experienCe via TeacheR’s adviCE (ACTRCE), an efficient reinforcement learning technique that extends the HER framework using natural language as the goal representation. We first analyze the differences among goal representation, and show that ACTRCE can efficiently solve difficult reinforcement learning problems in challenging 3D navigation tasks, whereas HER with non-language goal representation failed to learn. We also show that with language goal representations, the agent can generalize to unseen instructions, and even generalize to instructions with unseen lexicons. We further demonstrate it is crucial to use hindsight advice to solve challenging tasks, but we also found that little amount of hindsight advice is sufficient for the learning to take off, showing the practical aspect of the method.",
"title": ""
},
{
"docid": "06b43b63aafbb70de2601b59d7813576",
"text": "Facial expression recognizers based on handcrafted features have achieved satisfactory performance on many databases. Recently, deep neural networks, e. g. deep convolutional neural networks (CNNs) have been shown to boost performance on vision tasks. However, the mechanisms exploited by CNNs are not well established. In this paper, we establish the existence and utility of feature maps selective to action units in a deep CNN trained by transfer learning. We transfer a network pre-trained on the Image-Net dataset to the facial expression recognition task using the Karolinska Directed Emotional Faces (KDEF), Radboud Faces Database(RaFD) and extended Cohn-Kanade (CK+) database. We demonstrate that higher convolutional layers of the deep CNN trained on generic images are selective to facial action units. We also show that feature selection is critical in achieving robustness, with action unit selective feature maps being more critical in the facial expression recognition task. These results support the hypothesis that both human and deeply learned CNNs use similar mechanisms for recognizing facial expressions.",
"title": ""
},
{
"docid": "b47127a755d7bef1c5baf89253af46e7",
"text": "In an effort to explain pro-environmental behavior, environmental sociologists often study environmental attitudes. While much of this work is atheoretical, the focus on attitudes suggests that researchers are implicitly drawing upon attitude theory in psychology. The present research brings sociological theory to environmental sociology by drawing on identity theory to understand environmentally responsive behavior. We develop an environment identity model of environmental behavior that includes not only the meanings of the environment identity, but also the prominence and salience of the environment identity and commitment to the environment identity. We examine the identity process as it relates to behavior, though not to the exclusion of examining the effects of environmental attitudes. The findings reveal that individual agency is important in influencing environmentally responsive behavior, but this agency is largely through identity processes, rather than attitude processes. This provides an important theoretical and empirical advance over earlier work in environmental sociology.",
"title": ""
},
{
"docid": "6a2fa5998bf51eb40c1fd2d8f3dd8277",
"text": "In this paper, we propose a new descriptor for texture classification that is robust to image blurring. The descriptor utilizes phase information computed locally in a window for every image position. The phases of the four low-frequency coefficients are decorrelated and uniformly quantized in an eight-dimensional space. A histogram of the resulting code words is created and used as a feature in texture classification. Ideally, the low-frequency phase components are shown to be invariant to centrally symmetric blur. Although this ideal invariance is not completely achieved due to the finite window size, the method is still highly insensitive to blur. Because only phase information is used, the method is also invariant to uniform illumination changes. According to our experiments, the classification accuracy of blurred texture images is much higher with the new method than with the well-known LBP or Gabor filter bank methods. Interestingly, it is also slightly better for textures that are not blurred.",
"title": ""
},
{
"docid": "625741e64d35ccef11fb2679fe36384c",
"text": "Wearable technology comprises miniaturized sensors (eg, accelerometers) worn on the body and/or paired with mobile devices (eg, smart phones) allowing continuous patient monitoring in unsupervised, habitual environments (termed free-living). Wearable technologies are revolutionizing approaches to health care as a result of their utility, accessibility, and affordability. They are positioned to transform Parkinson's disease (PD) management through the provision of individualized, comprehensive, and representative data. This is particularly relevant in PD where symptoms are often triggered by task and free-living environmental challenges that cannot be replicated with sufficient veracity elsewhere. This review concerns use of wearable technology in free-living environments for people with PD. It outlines the potential advantages of wearable technologies and evidence for these to accurately detect and measure clinically relevant features including motor symptoms, falls risk, freezing of gait, gait, functional mobility, and physical activity. Technological limitations and challenges are highlighted, and advances concerning broader aspects are discussed. Recommendations to overcome key challenges are made. To date there is no fully validated system to monitor clinical features or activities in free-living environments. Robust accuracy and validity metrics for some features have been reported, and wearable technology may be used in these cases with a degree of confidence. Utility and acceptability appears reasonable, although testing has largely been informal. Key recommendations include adopting a multidisciplinary approach for standardizing definitions, protocols, and outcomes. Robust validation of developed algorithms and sensor-based metrics is required along with testing of utility. These advances are required before widespread clinical adoption of wearable technology can be realized. © 2016 International Parkinson and Movement Disorder Society.",
"title": ""
},
{
"docid": "f3b1e1c9effb7828a62187e9eec5fba7",
"text": "Histone modifications and chromatin-associated protein complexes are crucially involved in the control of gene expression, supervising cell fate decisions and differentiation. Many promoters in embryonic stem (ES) cells harbor a distinctive histone modification signature that combines the activating histone H3 Lys 4 trimethylation (H3K4me3) mark and the repressive H3K27me3 mark. These bivalent domains are considered to poise expression of developmental genes, allowing timely activation while maintaining repression in the absence of differentiation signals. Recent advances shed light on the establishment and function of bivalent domains; however, their role in development remains controversial, not least because suitable genetic models to probe their function in developing organisms are missing. Here, we explore avenues to and from bivalency and propose that bivalent domains and associated chromatin-modifying complexes safeguard proper and robust differentiation.",
"title": ""
},
{
"docid": "0d292d5c1875845408c2582c182a6eb9",
"text": "Partial Least Squares (PLS) is a wide class of methods for modeling relations between sets of observed variables by means of latent variables. It comprises of regression and classification tasks as well as dimension reduction techniques and modeling tools. The underlying assumption of all PLS methods is that the observed data is generated by a system or process which is driven by a small number of latent (not directly observed or measured) variables. Projections of the observed data to its latent structure by means of PLS was developed by Herman Wold and coworkers [48, 49, 52]. PLS has received a great amount of attention in the field of chemometrics. The algorithm has become a standard tool for processing a wide spectrum of chemical data problems. The success of PLS in chemometrics resulted in a lot of applications in other scientific areas including bioinformatics, food research, medicine, pharmacology, social sciences, physiology–to name but a few [28, 25, 53, 29, 18, 22]. This chapter introduces the main concepts of PLS and provides an overview of its application to different data analysis problems. Our aim is to present a concise introduction, that is, a valuable guide for anyone who is concerned with data analysis. In its general form PLS creates orthogonal score vectors (also called latent vectors or components) by maximising the covariance between different sets of variables. PLS dealing with two blocks of variables is considered in this chapter, although the PLS extensions to model relations among a higher number of sets exist [44, 46, 47, 48, 39]. PLS is similar to Canonical Correlation Analysis (CCA) where latent vectors with maximal correlation are extracted [24]. There are different PLS techniques to extract latent vectors, and each of them gives rise to a variant of PLS. PLS can be naturally extended to regression problems. The predictor and predicted (response) variables are each considered as a block of variables. PLS then extracts the score vectors which serve as a new predictor representation",
"title": ""
},
{
"docid": "0af5fae1c9ee71cb7058b025883488a0",
"text": "Ciphertext Policy Attribute-Based Encryption (CP-ABE) enforces expressive data access policies and each policy consists of a number of attributes. Most existing CP-ABE schemes incur a very large ciphertext size, which increases linearly with respect to the number of attributes in the access policy. Recently, Herranz proposed a construction of CP-ABE with constant ciphertext. However, Herranz do not consider the recipients' anonymity and the access policies are exposed to potential malicious attackers. On the other hand, existing privacy preserving schemes protect the anonymity but require bulky, linearly increasing ciphertext size. In this paper, we proposed a new construction of CP-ABE, named Privacy Preserving Constant CP-ABE (denoted as PP-CP-ABE) that significantly reduces the ciphertext to a constant size with any given number of attributes. Furthermore, PP-CP-ABE leverages a hidden policy construction such that the recipients' privacy is preserved efficiently. As far as we know, PP-CP-ABE is the first construction with such properties. Furthermore, we developed a Privacy Preserving Attribute-Based Broadcast Encryption (PP-AB-BE) scheme. Compared to existing Broadcast Encryption (BE) schemes, PP-AB-BE is more flexible because a broadcasted message can be encrypted by an expressive hidden access policy, either with or without explicit specifying the receivers. Moreover, PP-AB-BE significantly reduces the storage and communication overhead to the order of O(log N), where N is the system size. Also, we proved, using information theoretical approaches, PP-AB-BE attains minimal bound on storage overhead for each user to cover all possible subgroups in the communication system.",
"title": ""
},
{
"docid": "bcb615f8bfe9b2b13a4bfe72b698e4c7",
"text": "is granted to distribute this article for nonprofit, educational purposes if it is copied in its entirety and the journal is credited. PARE has the right to authorize third party reproduction of this article in print, electronic and database forms. Researchers occasionally have to work with an extremely small sample size, defined herein as N ≤ 5. Some methodologists have cautioned against using the t-test when the sample size is extremely small, whereas others have suggested that using the t-test is feasible in such a case. The present simulation study estimated the Type I error rate and statistical power of the one-and two-sample t-tests for normally distributed populations and for various distortions such as unequal sample sizes, unequal variances, the combination of unequal sample sizes and unequal variances, and a lognormal population distribution. Ns per group were varied between 2 and 5. Results show that the t-test provides Type I error rates close to the 5% nominal value in most of the cases, and that acceptable power (i.e., 80%) is reached only if the effect size is very large. This study also investigated the behavior of the Welch test and a rank-transformation prior to conducting the t-test (t-testR). Compared to the regular t-test, the Welch test tends to reduce statistical power and the t-testR yields false positive rates that deviate from 5%. This study further shows that a paired t-test is feasible with extremely small Ns if the within-pair correlation is high. It is concluded that there are no principal objections to using a t-test with Ns as small as 2. A final cautionary note is made on the credibility of research findings when sample sizes are small. The dictum \" more is better \" certainly applies to statistical inference. According to the law of large numbers, a larger sample size implies that confidence intervals are narrower and that more reliable conclusions can be reached. The reality is that researchers are usually far from the ideal \" mega-trial \" performed with 10,000 subjects (cf. Ioannidis, 2013) and will have to work with much smaller samples instead. For a variety of reasons, such as budget, time, or ethical constraints, it may not be possible to gather a large sample. In some fields of science, such as research on rare animal species, persons having a rare illness, or prodigies scoring at the extreme of an ability distribution (e.g., Ruthsatz & Urbach, 2012), …",
"title": ""
},
{
"docid": "4cd612ca54df72f3174ebaab26d12d9a",
"text": "Multiple, often conflicting objectives arise naturally in most real-world optimization scenarios. As evolutionary algorithms possess several characteristics that are desirable for this type of problem, this class of search strategies has been used for multiobjective optimization for more than a decade. Meanwhile evolutionary multiobjective optimization has become established as a separate subdiscipline combining the fields of evolutionary computation and classical multiple criteria decision making. This paper gives an overview of evolutionary multiobjective optimization with the focus on methods and theory. On the one hand, basic principles of multiobjective optimization and evolutionary algorithms are presented, and various algorithmic concepts such as fitness assignment, diversity preservation, and elitism are discussed. On the other hand, the tutorial includes some recent theoretical results on the performance of multiobjective evolutionary algorithms and addresses the question of how to simplify the exchange of methods and applications by means of a standardized interface.",
"title": ""
}
] |
scidocsrr
|
459bab70548948693bc57bce36cd0c0c
|
Credit Card Fraud Detection using Big Data Analytics: Use of PSOAANN based One-Class Classification
|
[
{
"docid": "c2df8cc7775bd4ec2bfdf4498d136c9f",
"text": "Particle Swarm Optimization is a popular heuristic search algorithm which is inspired by the social learning of birds or fishes. It is a swarm intelligence technique for optimization developed by Eberhart and Kennedy [1] in 1995. Inertia weight is an important parameter in PSO, which significantly affects the convergence and exploration-exploitation trade-off in PSO process. Since inception of Inertia Weight in PSO, a large number of variations of Inertia Weight strategy have been proposed. In order to propose one or more than one Inertia Weight strategies which are efficient than others, this paper studies 15 relatively recent and popular Inertia Weight strategies and compares their performance on 05 optimization test problems.",
"title": ""
},
{
"docid": "fc12de539644b1b2cf2406708e13e1e0",
"text": "Nonlinear principal component analysis is a novel technique for multivariate data analysis, similar to the well-known method of principal component analysis. NLPCA, like PCA, is used to identify and remove correlations among problem variables as an aid to dimensionality reduction, visualization, and exploratory data analysis. While PCA identifies only linear correlations between variables, NLPCA uncovers both linear and nonlinear correlations, without restriction on the character of the nonlinearities present in the data. NLPCA operates by training a feedforward neural network to perform the identity mapping, where the network inputs are reproduced at the output layer. The network contains an internal “bottleneck” layer (containing fewer nodes than input or output layers), which forces the network to develop a compact representation of the input data, and two additional hidden layers. The NLPCA method is demonstrated using time-dependent, simulated batch reaction data. Results show that NLPCA successfully reduces dimensionality and produces a feature space map resembling the actual distribution of the underlying system parameters.",
"title": ""
}
] |
[
{
"docid": "6844473b57606198066406c540f642a4",
"text": "Physical activity (PA) during physical education is important for health purposes and for developing physical fitness and movement skills. To examine PA levels and how PA was influenced by environmental and instructor-related characteristics, we assessed children’s activity during 368 lessons taught by 105 physical education specialists in 42 randomly selected schools in Hong Kong. Trained observers used SOFIT in randomly selected classes, grades 4–6, during three climatic seasons. Results indicated children’s PA levels met the U.S. Healthy People 2010 objective of 50% engagement time and were higher than comparable U.S. populations. Multiple regression analyses revealed that temperature, teacher behavior, and two lesson characteristics (subject matter and mode of delivery) were significantly associated with the PA levels. Most of these factors are modifiable, and changes could improve the quantity and intensity of children’s PA.",
"title": ""
},
{
"docid": "38c78be386aa3827f39825f9e40aa3cc",
"text": "Back Side Illumination (BSI) CMOS image sensors with two-layer photo detectors (2LPDs) have been fabricated and evaluated. The test pixel array has green pixels (2.2um x 2.2um) and a magenta pixel (2.2um x 4.4um). The green pixel has a single-layer photo detector (1LPD). The magenta pixel has a 2LPD and a vertical charge transfer (VCT) path to contact a back side photo detector. The 2LPD and the VCT were implemented by high-energy ion implantation from the circuit side. Measured spectral response curves from the 2LPDs fitted well with those estimated based on lightabsorption theory for Silicon detectors. Our measurement results show that the keys to realize the 2LPD in BSI are; (1) the reduction of crosstalk to the VCT from adjacent pixels and (2) controlling the backside photo detector thickness variance to reduce color signal variations.",
"title": ""
},
{
"docid": "ee6612fa13482f7e3bbc7241b9e22297",
"text": "The MOND limit is shown to follow from a requirement of space-time scale invariance of the equations of motion for nonrelativistic, purely gravitational systems; i.e., invariance of the equations of motion under (t, r) → (λt, λr) in the limit a 0 → ∞. It is suggested that this should replace the definition of the MOND limit based on the asymptotic behavior of a Newtonian-MOND interpolating function. In this way, the salient, deep-MOND results–asymptotically flat rotation curves, the mass-rotational-speed relation (baryonic Tully-Fisher relation), the Faber-Jackson relation, etc.–follow from a symmetry principle. For example, asymptotic flatness of rotation curves reflects the fact that radii change under scaling, while velocities do not. I then comment on the interpretation of the deep-MOND limit as one of \" zero mass \" : Rest masses, whose presence obstructs scaling symmetry, become negligible compared to the \" phantom \" , dynamical masses–those that some would attribute to dark matter. Unlike the former masses, the latter transform in a way that is consistent with the symmetry. Finally, I discuss the putative MOND-cosmology connection, in particular the possibility that MOND-especially the deep-MOND limit– is related to the asymptotic de Sitter geometry of our universe. I point out, in this connection, the possible relevance of a (classical) de Sitter-conformal-field-theory (dS/CFT) correspondence.",
"title": ""
},
{
"docid": "8994470e355b5db188090be731ee4fe9",
"text": "A system that allows museums to build and manage Virtual and Augmented Reality exhibitions based on 3D models of artifacts is presented. Dynamic content creation based on pre-designed visualization templates allows content designers to create virtual exhibitions very efficiently. Virtual Reality exhibitions can be presented both inside museums, e.g. on touch-screen displays installed inside galleries and, at the same time, on the Internet. Additionally, the presentation based on Augmented Reality technologies allows museum visitors to interact with the content in an intuitive and exciting manner.",
"title": ""
},
{
"docid": "fa0f3b4cf096260a1c9a69541117431e",
"text": "Existing techniques for interactive rendering of deformable translucent objects can accurately compute diffuse but not directional subsurface scattering effects. It is currently a common practice to gain efficiency by storing maps of transmitted irradiance. This is, however, not efficient if we need to store elements of irradiance from specific directions. To include changes in subsurface scattering due to changes in the direction of the incident light, we instead sample incident radiance and store scattered radiosity. This enables us to accommodate not only the common distance-based analytical models for subsurface scattering but also directional models. In addition, our method enables easy extraction of virtual point lights for transporting emergent light to the rest of the scene. Our method requires neither preprocessing nor texture parameterization of the translucent objects. To build our maps of scattered radiosity, we progressively render the model from different directions using an importance sampling pattern based on the optical properties of the material. We obtain interactive frame rates, our subsurface scattering results are close to ground truth, and our technique is the first to include interactive transport of emergent light from deformable translucent objects.",
"title": ""
},
{
"docid": "5af801ca029fa3a0517ef9d32e7baab0",
"text": "Gender is one of the most common attributes used to describe an individual. It is used in multiple domains such as human computer interaction, marketing, security, and demographic reports. Research has been performed to automate the task of gender recognition in constrained environment using face images, however, limited attention has been given to gender classification in unconstrained scenarios. This work attempts to address the challenging problem of gender classification in multi-spectral low resolution face images. We propose a robust Class Representative Autoencoder model, termed as AutoGen for the same. The proposed model aims to minimize the intra-class variations while maximizing the inter-class variations for the learned feature representations. Results on visible as well as near infrared spectrum data for different resolutions and multiple databases depict the efficacy of the proposed model. Comparative results with existing approaches and two commercial off-the-shelf systems further motivate the use of class representative features for classification.",
"title": ""
},
{
"docid": "5dec9381369e61c30112bd87a044cb2f",
"text": "A limiting factor for the application of IDA methods in many domains is the incompleteness of data repositories. Many records have fields that are not filled in, especially, when data entry is manual. In addition, a significant fraction of the entries can be erroneous and there may be no alternative but to discard these records. But every cell in a database is not an independent datum. Statistical relationships will constrain and, often determine, missing values. Data imputation, the filling in of missing values for partially missing data, can thus be an invaluable first step in many IDA projects. New imputation methods that can handle the large-scale problems and large-scale sparsity of industrial databases are needed. To illustrate the incomplete database problem, we analyze one database with instrumentation maintenance and test records for an industrial process. Despite regulatory requirements for process data collection, this database is less than 50% complete. Next, we discuss possible solutions to the missing data problem. Several approaches to imputation are noted and classified into two categories: data-driven and model-based. We then describe two machine-learning-based approaches that we have worked with. These build upon well-known algorithms: AutoClass and C4.5. Several experiments are designed, all using the maintenance database as a common test-bed but with various data splits and algorithmic variations. Results are generally positive with up to 80% accuracies of imputation. We conclude the paper by outlining some considerations in selecting imputation methods, and by discussing applications of data imputation for intelligent data analysis.",
"title": ""
},
{
"docid": "b3bab7639acde03cbe12253ebc6eba31",
"text": "Autism spectrum disorder (ASD) is a wide-ranging collection of developmental diseases with varying symptoms and degrees of disability. Currently, ASD is diagnosed mainly with psychometric tools, often unable to provide an early and reliable diagnosis. Recently, biochemical methods are being explored as a means to meet the latter need. For example, an increased predisposition to ASD has been associated with abnormalities of metabolites in folate-dependent one carbon metabolism (FOCM) and transsulfuration (TS). Multiple metabolites in the FOCM/TS pathways have been measured, and statistical analysis tools employed to identify certain metabolites that are closely related to ASD. The prime difficulty in such biochemical studies comes from (i) inefficient determination of which metabolites are most important and (ii) understanding how these metabolites are collectively related to ASD. This paper presents a new method based on scores produced in Support Vector Machine (SVM) modeling combined with High Dimensional Model Representation (HDMR) sensitivity analysis. The new method effectively and efficiently identifies the key causative metabolites in FOCM/TS pathways, ranks their importance, and discovers their independent and correlative action patterns upon ASD. Such information is valuable not only for providing a foundation for a pathological interpretation but also for potentially providing an early, reliable diagnosis ideally leading to a subsequent comprehensive treatment of ASD. With only tens of SVM model runs, the new method can identify the combinations of the most important metabolites in the FOCM/TS pathways that lead to ASD. Previous efforts to find these metabolites required hundreds of thousands of model runs with the same data.",
"title": ""
},
{
"docid": "31e955e62361b6857b31d09398760830",
"text": "Measuring “how much the human is in the interaction” - the level of engagement - is instrumental in building effective interactive robots. Engagement, however, is a complex, multi-faceted cognitive mechanism that is only indirectly observable. This article formalizes with-me-ness as one of such indirect measures. With-me-ness, a concept borrowed from the field of Computer-Supported Collaborative Learning, measures in a well-defined way to what extent the human is with the robot over the course of an interactive task. As such, it is a meaningful precursor of engagement. We expose in this paper the full methodology, from real-time estimation of the human's focus of attention (relying on a novel, open-source, vision-based head pose estimator), to on-line computation of with-me-ness. We report as well on the experimental validation of this approach, using a naturalistic setup involving children during a complex robot-teaching task.",
"title": ""
},
{
"docid": "43fec39ff1d77fbe246e31d98f33f861",
"text": "OBJECTIVE\nTo investigate a modulation of the N170 face-sensitive component related to the perception of other-race (OR) and same-race (SR) faces, as well as differences in face and non-face object processing, by combining different methods of event-related potential (ERP) signal analysis.\n\n\nMETHODS\nSixty-two channel ERPs were recorded in 12 Caucasian subjects presented with Caucasian and Asian faces along with non-face objects. Surface data were submitted to classical waveforms and ERP map topography analysis. Underlying brain sources were estimated with two inverse solutions (BESA and LORETA).\n\n\nRESULTS\nThe N170 face component was identical for both race faces. This component and its topography revealed a face specific pattern regardless of race. However, in this time period OR faces evoked significantly stronger medial occipital activity than SR faces. Moreover, in terms of maps, at around 170 ms face-specific activity significantly preceded non-face object activity by 25 ms. These ERP maps were followed by similar activation patterns across conditions around 190-300 ms, most likely reflecting the activation of visually derived semantic information.\n\n\nCONCLUSIONS\nThe N170 was not sensitive to the race of the faces. However, a possible pre-attentive process associated to the relatively stronger unfamiliarity for OR faces was found in medial occipital area. Moreover, our data provide further information on the time-course of face and non-face object processing.",
"title": ""
},
{
"docid": "efb9686dbd690109e8e5341043648424",
"text": "Because of the precise temporal resolution of electrophysiological recordings, the event-related potential (ERP) technique has proven particularly valuable for testing theories of perception and attention. Here, I provide a brief tutorial on the ERP technique for consumers of such research and those considering the use of human electrophysiology in their own work. My discussion begins with the basics regarding what brain activity ERPs measure and why they are well suited to reveal critical aspects of perceptual processing, attentional selection, and cognition, which are unobservable with behavioral methods alone. I then review a number of important methodological issues and often-forgotten facts that should be considered when evaluating or planning ERP experiments.",
"title": ""
},
{
"docid": "eda40814ecaecbe5d15ccba49f8a0d43",
"text": "The problem of achieving COnlUnCtlve goals has been central to domain-independent planning research, the nonhnear constraint-posting approach has been most successful Previous planners of this type have been comphcated, heurtstw, and ill-defined 1 have combmed and dtstdled the state of the art into a simple, precise, Implemented algorithm (TWEAK) which I have proved correct and complete 1 analyze previous work on domam-mdependent conlunctwe plannmg; tn retrospect tt becomes clear that all conluncttve planners, hnear and nonhnear, work the same way The efficiency and correctness of these planners depends on the traditional add/ delete-hst representation for actions, which drastically limits their usefulness I present theorems that suggest that efficient general purpose planning with more expressive action representations ts impossible, and suggest ways to avoid this problem",
"title": ""
},
{
"docid": "374674cc8a087d31ee2c801f7e49aa8d",
"text": "Two biological control agents, Bacillus subtilis AP-01 (Larminar(™)) and Trichoderma harzianum AP-001 (Trisan(™)) alone or/in combination were investigated in controlling three tobacco diseases, including bacterial wilt (Ralstonia solanacearum), damping-off (Pythium aphanidermatum), and frogeye leaf spot (Cercospora nicotiana). Tests were performed in greenhouse by soil sterilization prior to inoculation of the pathogens. Bacterial-wilt and damping off pathogens were drenched first and followed with the biological control agents and for comparison purposes, two chemical fungicides. But for frogeye leaf spot, which is an airborne fungus, a spraying procedure for every treatment including a chemical fungicide was applied instead of drenching. Results showed that neither B. subtilis AP-01 nor T harzianum AP-001 alone could control the bacterial wilt, but when combined, their controlling capabilities were as effective as a chemical treatment. These results were also similar for damping-off disease when used in combination. In addition, the combined B. subtilis AP-01 and T. harzianum AP-001 resulted in a good frogeye leaf spot control, which was not significantly different from the chemical treatment.",
"title": ""
},
{
"docid": "11b05bd0c0b5b9319423d1ec0441e8a7",
"text": "Today’s huge volumes of data, heterogeneous information and communication technologies, and borderless cyberinfrastructures create new challenges for security experts and law enforcement agencies investigating cybercrimes. The future of digital forensics is explored, with an emphasis on these challenges and the advancements needed to effectively protect modern societies and pursue cybercriminals.",
"title": ""
},
{
"docid": "851de4b014dfeb6f470876896b0416b3",
"text": "The design of bioinspired systems for chemical sensing is an engaging line of research in machine olfaction. Developments in this line could increase the lifetime and sensitivity of artificial chemo-sensory systems. Such approach is based on the sensory systems known in live organisms, and the resulting developed artificial systems are targeted to reproduce the biological mechanisms to some extent. Sniffing behaviour, sampling odours actively, has been studied recently in neuroscience, and it has been suggested that the respiration frequency is an important parameter of the olfactory system, since the odour perception, especially in complex scenarios such as novel odourants exploration, depends on both the stimulus identity and the sampling method. In this work we propose a chemical sensing system based on an array of 16 metal-oxide gas sensors that we combined with an external mechanical ventilator to simulate the biological respiration cycle. The tested gas classes formed a relatively broad combination of two analytes, acetone and ethanol, in binary mixtures. Two sets of lowfrequency and high-frequency features were extracted from the acquired signals to show that the high-frequency features contain information related to the gas class. In addition, such information is available at early stages of the measurement, which could make the technique ∗Corresponding author. Email address: [email protected] (Andrey Ziyatdinov) Preprint submitted to Sensors and Actuators B: Chemical August 15, 2014 suitable in early detection scenarios. The full data set is made publicly available to the community.",
"title": ""
},
{
"docid": "630baadcd861e58e0f36ba3a4e52ffd2",
"text": "The handshake gesture is an important part of the social etiquette in many cultures. It lies at the core of many human interactions, either in formal or informal settings: exchanging greetings, offering congratulations, and finalizing a deal are all activities that typically either start or finish with a handshake. The automated detection of a handshake can enable wide range of pervasive computing scanarios; in particular, different types of information can be exchanged and processed among the handshaking persons, depending on the physical/logical contexts where they are located and on their mutual acquaintance. This paper proposes a novel handshake detection system based on body sensor networks consisting of a resource-constrained wrist-wearable sensor node and a more capable base station. The system uses an effective collaboration technique among body sensor networks of the handshaking persons which minimizes errors associated with the application of classification algorithms and improves the overall accuracy in terms of the number of false positives and false negatives.",
"title": ""
},
{
"docid": "a166b3ed625a5d1db6c70ac41fbf1871",
"text": "The main challenge of online multi-object tracking is to reliably associate object trajectories with detections in each video frame based on their tracking history. In this work, we propose the Recurrent Autoregressive Network (RAN), a temporal generative modeling framework to characterize the appearance and motion dynamics of multiple objects over time. The RAN couples an external memory and an internal memory. The external memory explicitly stores previous inputs of each trajectory in a time window, while the internal memory learns to summarize long-term tracking history and associate detections by processing the external memory. We conduct experiments on the MOT 2015 and 2016 datasets to demonstrate the robustness of our tracking method in highly crowded and occluded scenes. Our method achieves top-ranked results on the two benchmarks.",
"title": ""
},
{
"docid": "b0989fb1775c486317b5128bc1c31c76",
"text": "Corporates are entering the brave new world of the internet and digitization without much regard for the fine print of a growing regulation regime. More traditional outsourcing arrangements are already falling foul of the regulators as rules and supervision intensifies. Furthermore, ‘shadow IT’ is proliferating as the attractions of SaaS, mobile, cloud services, social media, and endless new ‘apps’ drive usage outside corporate IT. Initial cost-benefit analyses of the Cloud make such arrangements look immediately attractive but losing control of architecture, security, applications and deployment can have far reaching and damaging regulatory consequences. From research in financial services, this paper details the increasing body of regulations, their inherent risks for businesses and how the dangers can be pre-empted and managed. We then delineate a model for managing these risks specifically focused on investigating, strategizing and governing outsourcing arrangements and related regulatory obligations.",
"title": ""
},
{
"docid": "8b39fe1fdfdc0426cc1c31ef2c825c58",
"text": "Approximate nonnegative matrix factorization is an emerging technique with a wide spectrum of potential applications in data analysis. Currently, the most-used algorithms for this problem are those proposed by Lee and Seung [7]. In this paper we present a variation of one of the Lee-Seung algorithms with a notably improved performance. We also show that algorithms of this type do not necessarily converge to local minima.",
"title": ""
},
{
"docid": "e2f5feaa4670bc1ae21d7c88f3d738e3",
"text": "Orofacial clefts are common birth defects and can occur as isolated, nonsyndromic events or as part of Mendelian syndromes. There is substantial phenotypic diversity in individuals with these birth defects and their family members: from subclinical phenotypes to associated syndromic features that is mirrored by the many genes that contribute to the etiology of these disorders. Identification of these genes and loci has been the result of decades of research using multiple genetic approaches. Significant progress has been made recently due to advances in sequencing and genotyping technologies, primarily through the use of whole exome sequencing and genome-wide association studies. Future progress will hinge on identifying functional variants, investigation of pathway and other interactions, and inclusion of phenotypic and ethnic diversity in studies.",
"title": ""
}
] |
scidocsrr
|
5ae31b1f3def0f1716b2d78df9aa3b18
|
Malacology: A Programmable Storage System
|
[
{
"docid": "41b83a85c1c633785766e3f464cbd7a6",
"text": "Distributed systems are easier to build than ever with the emergence of new, data-centric abstractions for storing and computing over massive datasets. However, similar abstractions do not exist for storing and accessing meta-data. To fill this gap, Tango provides developers with the abstraction of a replicated, in-memory data structure (such as a map or a tree) backed by a shared log. Tango objects are easy to build and use, replicating state via simple append and read operations on the shared log instead of complex distributed protocols; in the process, they obtain properties such as linearizability, persistence and high availability from the shared log. Tango also leverages the shared log to enable fast transactions across different objects, allowing applications to partition state across machines and scale to the limits of the underlying log without sacrificing consistency.",
"title": ""
}
] |
[
{
"docid": "8583ec1f457469dac3e2517a90a58423",
"text": "Sentiment analysis is the automatic classification of the overall opinion conveyed by a text towards its subject matter. This paper discusses an experiment in the sentiment analysis of of a collection of movie reviews that have been automatically translated to Indonesian. Following [1], we employ three well known classification techniques: naive bayes, maximum entropy, and support vector machines, employing unigram presence and frequency values as the features. The translation is achieved through machine translation and simple word substitutions based on a bilingual dictionary constructed from various online resources. Analysis of the Indonesian translations yielded an accuracy of up to 78.82%, still short of the accuracy for the English documents (80.09%), but satisfactorily high given the simple translation approach.",
"title": ""
},
{
"docid": "f6f00f03f195b1f03f15fffba2ccd199",
"text": "BACKGROUND\nRecent estimates concerning the prevalence of autistic spectrum disorder are much higher than those reported 30 years ago, with at least 1 in 400 children affected. This group of children and families have important service needs. The involvement of parents in implementing intervention strategies designed to help their autistic children has long been accepted as helpful. The potential benefits are increased skills and reduced stress for parents as well as children.\n\n\nOBJECTIVES\nThe objective of this review was to determine the extent to which parent-mediated early intervention has been shown to be effective in the treatment of children aged 1 year to 6 years 11 months with autistic spectrum disorder. In particular, it aimed to assess the effectiveness of such interventions in terms of the benefits for both children and their parents.\n\n\nSEARCH STRATEGY\nA range of psychological, educational and biomedical databases were searched. Bibliographies and reference lists of key articles were searched, field experts were contacted and key journals were hand searched.\n\n\nSELECTION CRITERIA\nOnly randomised or quasi-randomised studies were included. Study interventions had a significant focus on parent-implemented early intervention, compared to a group of children who received no treatment, a waiting list group or a different form of intervention. There was at least one objective, child related outcome measure.\n\n\nDATA COLLECTION AND ANALYSIS\nAppraisal of the methodological quality of included studies was carried out independently by two reviewers. Differences between the included studies in terms of the type of intervention, the comparison groups used and the outcome measures were too great to allow for direct comparison.\n\n\nMAIN RESULTS\nThe results of this review are based on data from two studies. Two significant results were found to favour parent training in one study: child language and maternal knowledge of autism. In the other, intensive intervention (involving parents, but primarily delivered by professionals) was associated with better child outcomes on direct measurement than were found for parent-mediated early intervention, but no differences were found in relation to measures of parent and teacher perceptions of skills and behaviours.\n\n\nREVIEWER'S CONCLUSIONS\nThis review has little to offer in the way of implications for practice: there were only two studies, the numbers of participants included were small, and the two studies could not be compared directly to one another. In terms of research, randomised controlled trials involving large samples need to be carried out, involving both short and long-term outcome information and full economic evaluations. Research in this area is hampered by barriers to randomisation, such as availability of equivalent services.",
"title": ""
},
{
"docid": "f285815e47ea0613fb1ceb9b69aee7df",
"text": "Communication at millimeter wave (mmWave) frequencies is defining a new era of wireless communication. The mmWave band offers higher bandwidth communication channels versus those presently used in commercial wireless systems. The applications of mmWave are immense: wireless local and personal area networks in the unlicensed band, 5G cellular systems, not to mention vehicular area networks, ad hoc networks, and wearables. Signal processing is critical for enabling the next generation of mmWave communication. Due to the use of large antenna arrays at the transmitter and receiver, combined with radio frequency and mixed signal power constraints, new multiple-input multiple-output (MIMO) communication signal processing techniques are needed. Because of the wide bandwidths, low complexity transceiver algorithms become important. There are opportunities to exploit techniques like compressed sensing for channel estimation and beamforming. This article provides an overview of signal processing challenges in mmWave wireless systems, with an emphasis on those faced by using MIMO communication at higher carrier frequencies.",
"title": ""
},
{
"docid": "a09d03e2de70774f443d2da88a32b555",
"text": "Recently, CNN reported on the future of brain-computer interfaces (BCIs) [1]. Brain-computer interfaces are devices that process a user’s brain signals to allow direct communication and interaction with the environment. BCIs bypass the normal neuromuscular output pathways and rely on digital signal processing and machine learning to translate brain signals to action (Figure 1). Historically, BCIs were developed with biomedical applications in mind, such as restoring communication in completely paralyzed individuals and replacing lost motor function. More recent applications have targeted non-disabled individuals by exploring the use of BCIs as a novel input device for entertainment and gaming.",
"title": ""
},
{
"docid": "b2b4e5162b3d7d99a482f9b82820d59e",
"text": "Modern Internet-enabled smart lights promise energy efficiency and many additional capabilities over traditional lamps. However, these connected lights create a new attack surface, which can be maliciously used to violate users’ privacy and security. In this paper, we design and evaluate novel attacks that take advantage of light emitted by modern smart bulbs in order to infer users’ private data and preferences. The first two attacks are designed to infer users’ audio and video playback by a systematic observation and analysis of the multimediavisualization functionality of smart light bulbs. The third attack utilizes the infrared capabilities of such smart light bulbs to create a covert-channel, which can be used as a gateway to exfiltrate user’s private data out of their secured home or office network. A comprehensive evaluation of these attacks in various real-life settings confirms their feasibility and affirms the need for new privacy protection mechanisms.",
"title": ""
},
{
"docid": "f13000c4870a85e491f74feb20f9b2d4",
"text": "Complex Event Processing (CEP) is a stream processing model that focuses on detecting event patterns in continuous event streams. While the CEP model has gained popularity in the research communities and commercial technologies, the problem of gracefully degrading performance under heavy load in the presence of resource constraints, or load shedding, has been largely overlooked. CEP is similar to “classical” stream data management, but addresses a substantially different class of queries. This unfortunately renders the load shedding algorithms developed for stream data processing inapplicable. In this paper we study CEP load shedding under various resource constraints. We formalize broad classes of CEP load-shedding scenarios as different optimization problems. We demonstrate an array of complexity results that reveal the hardness of these problems and construct shedding algorithms with performance guarantees. Our results shed some light on the difficulty of developing load-shedding algorithms that maximize utility.",
"title": ""
},
{
"docid": "9696e2f6ff6e16f378ae377798ee3332",
"text": "0957-4174/$ see front matter 2008 Elsevier Ltd. A doi:10.1016/j.eswa.2008.06.054 * Corresponding author. Address: School of Compu ogy, Beijing Jiaotong University, Beijing 100044, Chin E-mail address: [email protected] (J. Chen). As an important preprocessing technology in text classification, feature selection can improve the scalability, efficiency and accuracy of a text classifier. In general, a good feature selection method should consider domain and algorithm characteristics. As the Naïve Bayesian classifier is very simple and efficient and highly sensitive to feature selection, so the research of feature selection specially for it is significant. This paper presents two feature evaluation metrics for the Naïve Bayesian classifier applied on multiclass text datasets: Multi-class Odds Ratio (MOR), and Class Discriminating Measure (CDM). Experiments of text classification with Naïve Bayesian classifiers were carried out on two multi-class texts collections. As the results indicate, CDM and MOR gain obviously better selecting effect than other feature selection approaches. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cd3a01ec1db7672e07163534c0dfb32c",
"text": "In order to dock an embedded intelligent wheelchair into a U-shape bed automatically through visual servo, this paper proposes a real-time U-shape bed localization method on an embedded vision system based on FPGA and DSP. This method locates the U-shape bed through finding its line contours. The task can be done in a parallel way with FPGA does line extraction and DSP does line contour finding. Experiments show that, the speed and precision of the U-shape bed localization method proposed in this paper which based on an embedded vision system can satisfy the needs of the system.",
"title": ""
},
{
"docid": "1d44b1a7a9581981257a786ea38d54cb",
"text": "The decoder of the sphinx-4 speech recognition system incorporates several new design strategies which have not been used earlier in conventional decoders of HMM-based large vocabulary speech recognition systems. Some new design aspects include graph construction for multilevel parallel decoding with independent simultaneous feature streams without the use of compound HMMs, the incorporation of a generalized search algorithm that subsumes Viterbi and full-forward decoding as special cases, design of generalized language HMM graphs from grammars and language models of multiple standard formats, that toggles trivially from flat search structure to tree search structure etc. This paper describes some salient design aspects of the Sphinx-4 decoder and includes preliminary performance measures relating to speed and accu-",
"title": ""
},
{
"docid": "16880162165f4c95d6b01dc4cfc40543",
"text": "In this paper we present CMUcam3, a low-cost, open source, em bedded computer vision platform. The CMUcam3 is the third generation o f the CMUcam system and is designed to provide a flexible and easy to use ope n source development environment along with a more powerful hardware platfo rm. The goal of the system is to provide simple vision capabilities to small emb dded systems in the form of an intelligent sensor that is supported by an open sou rce community. The hardware platform consists of a color CMOS camera, a frame bu ff r, a low cost 32bit ARM7TDMI microcontroller, and an MMC memory card slot. T he CMUcam3 also includes 4 servo ports, enabling one to create entire, w orking robots using the CMUcam3 board as the only requisite robot processor. Cus tom C code can be developed using an optimized GNU toolchain and executabl es can be flashed onto the board using a serial port without external download ing hardware. The development platform includes a virtual camera target allowi ng for rapid application development exclusively on a PC. The software environment c omes with numerous open source example applications and libraries includi ng JPEG compression, frame differencing, color tracking, convolutions, histog ramming, edge detection, servo control, connected component analysis, FAT file syste m upport, and a face detector.",
"title": ""
},
{
"docid": "3ce1ed2dd4bacd96395b84c2b715d361",
"text": "In this paper, we present the design and fabrication of a centimeter-scale propulsion system for a robotic fish. The key to the design is selection of an appropriate actuator and a body frame that is simple and compact. SMA spring actuators are customized to provide the necessary work output for the microrobotic fish. The flexure joints, electrical wiring and attachment pads for SMA actuators are all embedded in a single layer of copper laminated polymer film, sandwiched between two layers of glass fiber. Instead of using individual actuators to rotate each joint, each actuator rotates all the joints to a certain mode shape and undulatory motion is created by a timed sequence of these mode shapes. Subcarangiform swimming mode of minnows has been emulated using five links and four actuators. The size of the four-joint propulsion system is 6 mm wide, 40 mm long with the body frame thickness of 0.25 mm.",
"title": ""
},
{
"docid": "e6662ebd9842e43bd31926ac171807ca",
"text": "INTRODUCTION\nDisruptions in sleep and circadian rhythms are observed in individuals with bipolar disorders (BD), both during acute mood episodes and remission. Such abnormalities may relate to dysfunction of the molecular circadian clock and could offer a target for new drugs.\n\n\nAREAS COVERED\nThis review focuses on clinical, actigraphic, biochemical and genetic biomarkers of BDs, as well as animal and cellular models, and highlights that sleep and circadian rhythm disturbances are closely linked to the susceptibility to BDs and vulnerability to mood relapses. As lithium is likely to act as a synchronizer and stabilizer of circadian rhythms, we will review pharmacogenetic studies testing circadian gene polymorphisms and prophylactic response to lithium. Interventions such as sleep deprivation, light therapy and psychological therapies may also target sleep and circadian disruptions in BDs efficiently for treatment and prevention of bipolar depression.\n\n\nEXPERT OPINION\nWe suggest that future research should clarify the associations between sleep and circadian rhythm disturbances and alterations of the molecular clock in order to identify critical targets within the circadian pathway. The investigation of such targets using human cellular models or animal models combined with 'omics' approaches are crucial steps for new drug development.",
"title": ""
},
{
"docid": "9e037018da3ebcd7967b9fbf07c83909",
"text": "Studying temporal dynamics of topics in social media is very useful to understand online user behaviors. Most of the existing work on this subject usually monitors the global trends, ignoring variation among communities. Since users from different communities tend to have varying tastes and interests, capturing communitylevel temporal change can improve the understanding and management of social content. Additionally, it can further facilitate the applications such as community discovery, temporal prediction and online marketing. However, this kind of extraction becomes challenging due to the intricate interactions between community and topic, and intractable computational complexity. In this paper, we take a unified solution towards the communitylevel topic dynamic extraction. A probabilistic model, CosTot (Community Specific Topics-over-Time) is proposed to uncover the hidden topics and communities, as well as capture community-specific temporal dynamics. Specifically, CosTot considers text, time, and network information simultaneously, and well discovers the interactions between community and topic over time. We then discuss the approximate inference implementation to enable scalable computation of model parameters, especially for large social data. Based on this, the application layer support for multi-scale temporal analysis and community exploration is also investigated. We conduct extensive experimental studies on a large real microblog dataset, and demonstrate the superiority of proposed model on tasks of time stamp prediction, link prediction and topic perplexity.",
"title": ""
},
{
"docid": "d1b20385d90fe1e98a07f9cf55af6adb",
"text": "Cerebellar cognitive affective syndrome (CCAS; Schmahmann's syndrome) is characterized by deficits in executive function, linguistic processing, spatial cognition, and affect regulation. Diagnosis currently relies on detailed neuropsychological testing. The aim of this study was to develop an office or bedside cognitive screen to help identify CCAS in cerebellar patients. Secondary objectives were to evaluate whether available brief tests of mental function detect cognitive impairment in cerebellar patients, whether cognitive performance is different in patients with isolated cerebellar lesions versus complex cerebrocerebellar pathology, and whether there are cognitive deficits that should raise red flags about extra-cerebellar pathology. Comprehensive standard neuropsychological tests, experimental measures and clinical rating scales were administered to 77 patients with cerebellar disease-36 isolated cerebellar degeneration or injury, and 41 complex cerebrocerebellar pathology-and to healthy matched controls. Tests that differentiated patients from controls were used to develop a screening instrument that includes the cardinal elements of CCAS. We validated this new scale in a new cohort of 39 cerebellar patients and 55 healthy controls. We confirm the defining features of CCAS using neuropsychological measures. Deficits in executive function were most pronounced for working memory, mental flexibility, and abstract reasoning. Language deficits included verb for noun generation and phonemic > semantic fluency. Visual spatial function was degraded in performance and interpretation of visual stimuli. Neuropsychiatric features included impairments in attentional control, emotional control, psychosis spectrum disorders and social skill set. From these results, we derived a 10-item scale providing total raw score, cut-offs for each test, and pass/fail criteria that determined 'possible' (one test failed), 'probable' (two tests failed), and 'definite' CCAS (three tests failed). When applied to the exploratory cohort, and administered to the validation cohort, the CCAS/Schmahmann scale identified sensitivity and selectivity, respectively as possible exploratory cohort: 85%/74%, validation cohort: 95%/78%; probable exploratory cohort: 58%/94%, validation cohort: 82%/93%; and definite exploratory cohort: 48%/100%, validation cohort: 46%/100%. In patients in the exploratory cohort, Mini-Mental State Examination and Montreal Cognitive Assessment scores were within normal range. Complex cerebrocerebellar disease patients were impaired on similarities in comparison to isolated cerebellar disease. Inability to recall words from multiple choice occurred only in patients with extra-cerebellar disease. The CCAS/Schmahmann syndrome scale is useful for expedited clinical assessment of CCAS in patients with cerebellar disorders.awx317media15678692096001.",
"title": ""
},
{
"docid": "be43ca444001f766e14dd042c411a34f",
"text": "During crowded events, cellular networks face voice and data traffic volumes that are often orders of magnitude higher than what they face during routine days. Despite the use of portable base stations for temporarily increasing communication capacity and free Wi-Fi access points for offloading Internet traffic from cellular base stations, crowded events still present significant challenges for cellular network operators looking to reduce dropped call events and improve Internet speeds. For effective cellular network design, management, and optimization, it is crucial to understand how cellular network performance degrades during crowded events, what causes this degradation, and how practical mitigation schemes would perform in real-life crowded events. This paper makes a first step towards this end by characterizing the operational performance of a tier-1 cellular network in the United States during two high-profile crowded events in 2012. We illustrate how the changes in population distribution, user behavior, and application workload during crowded events result in significant voice and data performance degradation, including more than two orders of magnitude increase in connection failures. Our findings suggest two mechanisms that can improve performance without resorting to costly infrastructure changes: radio resource allocation tuning and opportunistic connection sharing. Using trace-driven simulations, we show that more aggressive release of radio resources via 1-2 seconds shorter RRC timeouts as compared to routine days helps to achieve better tradeoff between wasted radio resources, energy consumption, and delay during crowded events; and opportunistic connection sharing can reduce connection failures by 95% when employed by a small number of devices in each cell sector.",
"title": ""
},
{
"docid": "be4b6d68005337457e66fcdf21a04733",
"text": "A real-time algorithm to detect eye blinks in a video sequence from a standard camera is proposed. Recent landmark detectors, trained on in-the-wild datasets exhibit excellent robustness against face resolution, varying illumination and facial expressions. We show that the landmarks are detected precisely enough to reliably estimate the level of the eye openness. The proposed algorithm therefore estimates the facial landmark positions, extracts a single scalar quantity – eye aspect ratio (EAR) – characterizing the eye openness in each frame. Finally, blinks are detected either by an SVM classifier detecting eye blinks as a pattern of EAR values in a short temporal window or by hidden Markov model that estimates the eye states followed by a simple state machine recognizing the blinks according to the eye closure lengths. The proposed algorithm has comparable results with the state-of-the-art methods on three standard datasets.",
"title": ""
},
{
"docid": "a7a51eb9cb434a581eac782da559094b",
"text": "An ever-increasing amount of information on the Web today is available only through search interfaces: the users have to type in a set of keywords in a search form in order to access the pages from certain Web sites. These pages are often referred to as the Hidden Web or the Deep Web. Since there are no static links to the Hidden Web pages, search engines cannot discover and index such pages and thus do not return them in the results. However, according to recent studies, the content provided by many Hidden Web sites is often of very high quality and can be extremely valuable to many users. In this paper, we study how we can build an effective Hidden Web crawler that can autonomously discover and download pages from the Hidden Web. Since the only “entry point” to a Hidden Web site is a query interface, the main challenge that a Hidden Web crawler has to face is how to automatically generate meaningful queries to issue to the site. Here, we provide a theoretical framework to investigate the query generation problem for the Hidden Web and we propose effective policies for generating queries automatically. Our policies proceed iteratively, issuing a different query in every iteration. We experimentally evaluate the effectiveness of these policies on 4 real Hidden Web sites and our results are very promising. For instance, in one experiment, one of our policies downloaded more than 90% of a Hidden Web site (that contains 14 million documents) after issuing fewer than 100 queries.",
"title": ""
},
{
"docid": "b18af8eaea5dd26cdac5d13c7aa6ce4c",
"text": "Botnet is one of the most serious threats to cyber security as it provides a distributed platform for several illegal activities. Regardless of the availability of numerous methods proposed to detect botnets, still it is a challenging issue as botmasters are continuously improving bots to make them stealthier and evade detection. Most of the existing detection techniques cannot detect modern botnets in an early stage, or they are specific to command and control protocol and structures. In this paper, we propose a novel approach to detect botnets irrespective of their structures, based on network traffic flow behavior analysis and machine learning techniques. The experimental evaluation of the proposed method with real-world benchmark datasets shows the efficiency of the method. Also, the system is able to identify the new botnets with high detection accuracy and low false positive rate. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
scidocsrr
|
5e1f70eb51a33d4220fd2a8a766ad212
|
Haptics: Perception, Devices, Control, and Applications
|
[
{
"docid": "d647fc2b5635a3dfcebf7843fef3434c",
"text": "Touch is our primary non-verbal communication channel for conveying intimate emotions and as such essential for our physical and emotional wellbeing. In our digital age, human social interaction is often mediated. However, even though there is increasing evidence that mediated touch affords affective communication, current communication systems (such as videoconferencing) still do not support communication through the sense of touch. As a result, mediated communication does not provide the intense affective experience of co-located communication. The need for ICT mediated or generated touch as an intuitive way of social communication is even further emphasized by the growing interest in the use of touch-enabled agents and robots for healthcare, teaching, and telepresence applications. Here, we review the important role of social touch in our daily life and the available evidence that affective touch can be mediated reliably between humans and between humans and digital agents. We base our observations on evidence from psychology, computer science, sociology, and neuroscience with focus on the first two. Our review shows that mediated affective touch can modulate physiological responses, increase trust and affection, help to establish bonds between humans and avatars or robots, and initiate pro-social behavior. We argue that ICT mediated or generated social touch can (a) intensify the perceived social presence of remote communication partners and (b) enable computer systems to more effectively convey affective information. However, this research field on the crossroads of ICT and psychology is still embryonic and we identify several topics that can help to mature the field in the following areas: establishing an overarching theoretical framework, employing better researchmethodologies, developing basic social touch building blocks, and solving specific ICT challenges.",
"title": ""
}
] |
[
{
"docid": "8dce819cc31cf4899cf4bad2dd117dc1",
"text": "BACKGROUND\nCaffeine and sodium bicarbonate ingestion have been suggested to improve high-intensity intermittent exercise, but it is unclear if these ergogenic substances affect performance under provoked metabolic acidification. To study the effects of caffeine and sodium bicarbonate on intense intermittent exercise performance and metabolic markers under exercise-induced acidification, intense arm-cranking exercise was performed prior to intense intermittent running after intake of placebo, caffeine and sodium bicarbonate.\n\n\nMETHODS\nMale team-sports athletes (n = 12) ingested sodium bicarbonate (NaHCO3; 0.4 g.kg(-1) b.w.), caffeine (CAF; 6 mg.kg(-1) b.w.) or placebo (PLA) on three different occasions. Thereafter, participants engaged in intense arm exercise prior to the Yo-Yo intermittent recovery test level-2 (Yo-Yo IR2). Heart rate, blood lactate and glucose as well as rating of perceived exertion (RPE) were determined during the protocol.\n\n\nRESULTS\nCAF and NaHCO3 elicited a 14 and 23% improvement (P < 0.05), respectively, in Yo-Yo IR2 performance, post arm exercise compared to PLA. The NaHCO3 trial displayed higher [blood lactate] (P < 0.05) compared to CAF and PLA (10.5 ± 1.9 vs. 8.8 ± 1.7 and 7.7 ± 2.0 mmol.L(-1), respectively) after the Yo-Yo IR2. At exhaustion CAF demonstrated higher (P < 0.05) [blood glucose] compared to PLA and NaHCO3 (5.5 ± 0.7 vs. 4.2 ± 0.9 vs. 4.1 ± 0.9 mmol.L(-1), respectively). RPE was lower (P < 0.05) during the Yo-Yo IR2 test in the NaHCO3 trial in comparison to CAF and PLA, while no difference in heart rate was observed between trials.\n\n\nCONCLUSIONS\nCaffeine and sodium bicarbonate administration improved Yo-Yo IR2 performance and lowered perceived exertion after intense arm cranking exercise, with greater overall effects of sodium bicarbonate intake.",
"title": ""
},
{
"docid": "53569b0225db62d4c627e41469cb91b8",
"text": "A first proof-of-concept mm-sized implant based on ultrasonic power transfer and RF uplink data transmission is presented. The prototype consists of a 1 mm × 1 mm piezoelectric receiver, a 1 mm × 2 mm chip designed in 65 nm CMOS and a 2.5 mm × 2.5 mm off-chip antenna, and operates through 3 cm of chicken meat which emulates human tissue. The implant supports a DC load power of 100 μW allowing for high-power applications. It also transmits consecutive UWB pulse sequences activated by the ultrasonic downlink data path, demonstrating sufficient power for an Mary PPM transmitter in uplink.",
"title": ""
},
{
"docid": "04549adc3e956df0f12240c4d9c02bd7",
"text": "Gamification, applying game mechanics to nongame contexts, has recently become a hot topic across a wide range of industries, and has been presented as a potential disruptive force in education. It is based on the premise that it can promote motivation and engagement and thus contribute to the learning process. However, research examining this assumption is scarce. In a set of studies we examined the effects of points, a basic element of gamification, on performance in a computerized assessment of mastery and fluency of basic mathematics concepts. The first study, with adult participants, found no effect of the point manipulation on accuracy of responses, although the speed of responses increased. In a second study, with 6e8 grade middle school participants, we found the same results for the two aspects of performance. In addition, middle school participants' reactions to the test revealed higher likeability ratings for the test under the points condition, but only in the first of the two sessions, and perceived effort during the test was higher in the points condition, but only for eighth grade students. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5e962b323daa883da2ae8416d1fc10fa",
"text": "Network security analysis and ensemble data visualization are two active research areas. Although they are treated as separate domains, they share many common challenges and characteristics. Both focus on scalability, time-dependent data analytics, and exploration of patterns and unusual behaviors in large datasets. These overlaps provide an opportunity to apply ensemble visualization research to improve network security analysis. To study this goal, we propose methods to interpret network security alerts and flow traffic as ensemble members. We can then apply ensemble visualization techniques in a network analysis environment to produce a network ensemble visualization system. Including ensemble representations provide new, in-depth insights into relationships between alerts and flow traffic. Analysts can cluster traffic with similar behavior and identify traffic with unusual patterns, something that is difficult to achieve with high-level overviews of large network datasets. Furthermore, our ensemble approach facilitates analysis of relationships between alerts and flow traffic, improves scalability, maintains accessibility and configurability, and is designed to fit our analysts' working environment, mental models, and problem solving strategies.",
"title": ""
},
{
"docid": "c8379f1382a191985cf55773d0cd02c9",
"text": "Utilizing Big Data scenarios that are generated from increasing digitization and data availability is a core topic in IS research. There are prospective advantages in generating business value from those scenarios through improved decision support and new business models. In order to harvest those potential advantages Big Data capabilities are required, including not only technological aspects of data management and analysis but also strategic and organisational aspects. To assess these capabilities, one can use capability assessment models. Employing a qualitative meta-analysis on existing capability assessment models, it can be revealed that the existing approaches greatly differ in their fundamental structure due to heterogeneous model elements. The heterogeneous elements are therefore synthesized and transformed into consistent assessment dimensions to fulfil the requirements of exhaustive and mutually exclusive aspects of a capability assessment model. As part of a broader research project to develop a consistent and harmonized Big Data Capability Assessment Model (BDCAM) a new design for a capability matrix is proposed including not only capability dimensions but also Big Data life cycle tasks in order to measure specific weaknesses along the process of data-driven value creation.",
"title": ""
},
{
"docid": "31049561dee81500d048641023bbb6dd",
"text": "Today’s networks are filled with a massive and ever-growing variety of network functions that coupled with proprietary devices, which leads to network ossification and difficulty in network management and service provision. Network Function Virtualization (NFV) is a promising paradigm to change such situation by decoupling network functions from the underlying dedicated hardware and realizing them in the form of software, which are referred to as Virtual Network Functions (VNFs). Such decoupling introduces many benefits which include reduction of Capital Expenditure (CAPEX) and Operation Expense (OPEX), improved flexibility of service provision, etc. In this paper, we intend to present a comprehensive survey on NFV, which starts from the introduction of NFV motivations. Then, we explain the main concepts of NFV in terms of terminology, standardization and history, and how NFV differs from traditional middlebox based network. After that, the standard NFV architecture is introduced using a bottom up approach, based on which the corresponding use cases and solutions are also illustrated. In addition, due to the decoupling of network functionalities and hardware, people’s attention is gradually shifted to the VNFs. Next, we provide an extensive and in-depth discussion on state-of-the-art VNF algorithms including VNF placement, scheduling, migration, chaining and multicast. Finally, to accelerate the NFV deployment and avoid pitfalls as far as possible, we survey the challenges faced by NFV and the trend for future directions. In particular, the challenges are discussed from bottom up, which include hardware design, VNF deployment, VNF life cycle control, service chaining, performance evaluation, policy enforcement, energy efficiency, reliability and security, and the future directions are discussed around the current trend towards network softwarization. © 2018 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "404c1f0cc09b5c429555332d90be67d3",
"text": "Activists have used social media during modern civil uprisings, and researchers have found that the generated content is predictive of off-line protest activity. However, questions remain regarding the drivers of this predictive power. In this paper, we begin by deriving predictor variables for individuals’ protest decisions from the literature on protest participation theory. We then test these variables on the case of Twitter and the 2011 Egyptian revolution. We find significant positive correlations between the volume of future-protest descriptions on Twitter and protest onsets. We do not find significant correlations between such onsets and the preceding volume of political expressions, individuals’ access to news, and connections with political activists. These results locate the predictive power of social media in its function as a protest advertisement and organization mechanism. We then build predictive models using future-protest descriptions and compare these models with baselines informed by daily event counts from the Global Database of Events, Location, and Tone (GDELT). Inspection of the significant variables in our GDELT models reveals that an increased military presence may be predictive of protest onsets in major cities. In sum, this paper highlights the ways in which online activism shapes off-line behavior during civil uprisings.",
"title": ""
},
{
"docid": "4a22a7dbcd1515e2b1b6e7748ffa3e02",
"text": "Average public feedback scores given to sellers have increased strongly over time in an online labor market. Changes in marketplace composition or improved seller performance cannot fully explain this trend. We propose that two factors inflated reputations: (1) it costs more to give bad feedback than good feedback and (2) this cost to raters is increasing in the cost to sellers from bad feedback. Together, (1) and (2) can lead to an equilibrium where feedback is always positive, regardless of performance. In response, the marketplace encouraged buyers to additionally give private feedback. This private feedback was substantially more candid and more predictive of future worker performance. When aggregates of private feedback about each job applicant were experimentally provided to employers as a private feedback score, employers used these scores when making screening and hiring decisions.",
"title": ""
},
{
"docid": "b4d4a42e261a5272a7865065b74ff91b",
"text": "The acquisition of Magnetic Resonance Imaging (MRI) is inherently slow. Inspired by recent advances in deep learning, we propose a framework for reconstructing MR images from undersampled data using a deep cascade of convolutional neural networks to accelerate the data acquisition process. We show that for Cartesian undersampling of 2D cardiac MR images, the proposed method outperforms the state-of-the-art compressed sensing approaches, such as dictionary learning-based MRI (DLMRI) reconstruction, in terms of reconstruction error, perceptual quality and reconstruction speed for both 3-fold and 6-fold undersampling. Compared to DLMRI, the error produced by the method proposed is approximately twice as small, allowing to preserve anatomical structures more faithfully. Using our method, each image can be reconstructed in 23ms, which is fast enough to enable real-time applications.",
"title": ""
},
{
"docid": "39b33447ab67263414e662f07f6a3c26",
"text": "Most of current concepts for a visual prosthesis are based on neuronal electrical stimulation at different locations along the visual pathways within the central nervous system. The different designs of visual prostheses are named according to their locations (i.e., cortical, optic nerve, subretinal, and epiretinal). Visual loss caused by outer retinal degeneration in diseases such as retinitis pigmentosa or age-related macular degeneration can be reversed by electrical stimulation of the retina or the optic nerve (retinal or optic nerve prostheses, respectively). On the other hand, visual loss caused by inner or whole thickness retinal diseases, eye loss, optic nerve diseases (tumors, ischemia, inflammatory processes etc.), or diseases of the central nervous system (not including diseases of the primary and secondary visual cortices) can be reversed by a cortical visual prosthesis. The intent of this article is to provide an overview of current and future concepts of retinal and optic nerve prostheses. This article will begin with general considerations that are related to all or most of visual prostheses and then concentrate on the retinal and optic nerve designs. The authors believe that the field has grown beyond the scope of a single article so cortical prostheses will be described only because of their direct effect on the concept and technical development of the other prostheses, and this will be done in a more general and historic perspective.",
"title": ""
},
{
"docid": "edfb50c784e6e7a89ce12d524f667398",
"text": "Unconventional machining processes (communally named advanced or modern machining processes) are widely used by manufacturing industries. These advanced machining processes allow producing complex profiles and high quality-products. However, several process parameters should be optimized to achieve this end. In this paper, the optimization of process parameters of two conventional and four advanced machining processes is investigated: drilling process, grinding process, abrasive jet machining (AJM), abrasive water jet machining (AWJM), ultrasonic machining (USM), and water jet machining (WJM), respectively. This research employed two bio-inspired algorithms called the cuckoo optimization algorithm (COA) and the hoopoe heuristic (HH) to optimize the machining control parameters of these processes. The obtained results are compared with other optimization algorithms described and applied in the literature.",
"title": ""
},
{
"docid": "047949b0dba35fb11f9f3b716893701d",
"text": "Many state-of-the-art segmentation algorithms rely on Markov or Conditional Random Field models designed to enforce spatial and global consistency constraints. This is often accomplished by introducing additional latent variables to the model, which can greatly increase its complexity. As a result, estimating the model parameters or computing the best maximum a posteriori (MAP) assignment becomes a computationally expensive task. In a series of experiments on the PASCAL and the MSRC datasets, we were unable to find evidence of a significant performance increase attributed to the introduction of such constraints. On the contrary, we found that similar levels of performance can be achieved using a much simpler design that essentially ignores these constraints. This more simple approach makes use of the same local and global features to leverage evidence from the image, but instead directly biases the preferences of individual pixels. While our investigation does not prove that spatial and consistency constraints are not useful in principle, it points to the conclusion that they should be validated in a larger context.",
"title": ""
},
{
"docid": "40fda9cba754c72f1fba17dd3a5759b2",
"text": "Humans can easily recognize handwritten words, after gaining basic knowledge of languages. This knowledge needs to be transferred to computers for automatic character recognition. The work proposed in this paper tries to automate recognition of handwritten hindi isolated characters using multiple classifiers. For feature extraction, it uses histogram of oriented gradients as one feature and profile projection histogram as another feature. The performance of various classifiers has been evaluated using theses features experimentally and quadratic SVM has been found to produce better results.",
"title": ""
},
{
"docid": "90b3e6aee6351b196445843ca8367a3b",
"text": "Modeling how visual saliency guides the deployment of atten tion over visual scenes has attracted much interest recently — among both computer v ision and experimental/computational researchers — since visual attention is a key function of both machine and biological vision systems. Research efforts in compute r vision have mostly been focused on modeling bottom-up saliency. Strong influences o n attention and eye movements, however, come from instantaneous task demands. Here , w propose models of top-down visual guidance considering task influences. The n ew models estimate the state of a human subject performing a task (here, playing video gam es), and map that state to an eye position. Factors influencing state come from scene gi st, physical actions, events, and bottom-up saliency. Proposed models fall into two categ ori s. In the first category, we use classical discriminative classifiers, including Reg ression, kNN and SVM. In the second category, we use Bayesian Networks to combine all the multi-modal factors in a unified framework. Our approaches significantly outperfor m 15 competing bottom-up and top-down attention models in predicting future eye fixat ions on 18,000 and 75,00 video frames and eye movement samples from a driving and a flig ht combat video game, respectively. We further test and validate our approaches o n 1.4M video frames and 11M fixations samples and in all cases obtain higher prediction s c re that reference models.",
"title": ""
},
{
"docid": "9c2ce030230ccd91fdbfbd9544596604",
"text": "The kind of causal inference seen in natural human thought can be \"algorithmitized\" to help produce human-level machine intelligence.",
"title": ""
},
{
"docid": "b7e5028bc6936e75692fbcebaacebb2c",
"text": "dent processes and domain-specific knowledge. Until recently, information extraction has leaned heavily on domain knowledge, which requires either manual engineering or manual tagging of examples (Miller et al. 1998; Soderland 1999; Culotta, McCallum, and Betz 2006). Semisupervised approaches (Riloff and Jones 1999, Agichtein and Gravano 2000, Rosenfeld and Feldman 2007) require only a small amount of hand-annotated training, but require this for every relation of interest. This still presents a knowledge engineering bottleneck, when one considers the unbounded number of relations in a diverse corpus such as the web. Shinyama and Sekine (2006) explored unsupervised relation discovery using a clustering algorithm with good precision, but limited scalability. The KnowItAll research group is a pioneer of a new paradigm, Open IE (Banko et al. 2007, Banko and Etzioni 2008), that operates in a totally domain-independent manner and at web scale. An Open IE system makes a single pass over its corpus and extracts a diverse set of relational tuples without requiring any relation-specific human input. Open IE is ideally suited to corpora such as the web, where the target relations are not known in advance and their number is massive. Articles",
"title": ""
},
{
"docid": "8c00b522ae9429f6f9cd7fb7174578ec",
"text": "We extend photometric stereo to make it work with internet images, which are typically associated with different viewpoints and significant noise. For popular tourism sites, thousands of images can be obtained from internet search engines. With these images, our method computes the global illumination for each image and the surface orientation at some scene points. The illumination information can then be used to estimate the weather conditions (such as sunny or cloudy) for each image, since there is a strong correlation between weather and scene illumination. We demonstrate our method on several challenging examples.",
"title": ""
},
{
"docid": "63096ce03a1aa0effffdf59e742b1653",
"text": "URB-754 (6-methyl-2-[(4-methylphenyl)amino]-1-benzoxazin-4-one) was identified as a new type of designer drug in illegal products. Though many of the synthetic cannabinoids detected in illegal products are known to have affinities for cannabinoid CB1/CB2 receptors, URB-754 was reported to inhibit an endocannabinoid deactivating enzyme. Furthermore, an unknown compound (N,5-dimethyl-N-(1-oxo-1-(p-tolyl)butan-2-yl)-2-(N'-(p-tolyl)ureido)benzamide), which is deduced to be the product of a reaction between URB-754 and a cathinone derivative 4-methylbuphedrone (4-Me-MABP), was identified along with URB-754 and 4-Me-MABP in the same product. It is of interest that the product of a reaction between two different types of designer drugs, namely, a cannabinoid-related designer drug and a cathinone-type designer drug, was found in one illegal product. In addition, 12 cannabimimetic compounds, 5-fluoropentyl-3-pyridinoylindole, JWH-307, JWH-030, UR-144, 5FUR-144 (synonym: XLR11), (4-methylnaphtyl)-JWH-022 [synonym: N-(5-fluoropentyl)-JWH-122], AM-2232, (4-methylnaphtyl)-AM-2201 (MAM-2201), N-(4-pentenyl)-JWH-122, JWH-213, (4-ethylnaphtyl)-AM-2201 (EAM-2201) and AB-001, were also detected herein as newly distributed designer drugs in Japan. Furthermore, a tryptamine derivative, 4-hydroxy-diethyltryptamine (4-OH-DET), was detected together with a synthetic cannabinoid, APINACA, in the same product.",
"title": ""
},
{
"docid": "4741e9e296bfec372df7779ffa4bb3e3",
"text": "Dry electrodes for impedimetric sensing of physiological parameters (such as ECG, EEG, and GSR) promise the ability for long duration monitoring. This paper describes the feasibility of a novel dry electrode interfacing using Patterned Vertical Carbon Nanotube (pvCNT) for physiological parameter sensing. The electrodes were fabricated on circular discs (φ = 10 mm) stainless steel substrate. Multiwalled electrically conductive carbon nanotubes were grown in pattered pillar formation of 100 μm squared with 50, 100, 200 and 500μm spacing. The heights of the pillars were between 1 to 1.5 mm. A comparative test with commercial ECG electrodes shows that pvCNT has lower electrical impedance, stable impedance over very long time, and comparable signal capture in vitro. Long duration study shows minimal degradation of impedance over 2 days period. The results demonstrate the feasibility of using pvCNT dry electrodes for physiological parameter sensing.",
"title": ""
},
{
"docid": "e648aa29c191885832b4deee5af9b5b5",
"text": "Development of controlled release transdermal dosage form is a complex process involving extensive research. Transdermal patches have been developed to improve clinical efficacy of the drug and to enhance patient compliance by delivering smaller amount of drug at a predetermined rate. This makes evaluation studies even more important in order to ensure their desired performance and reproducibility under the specified environmental conditions. These studies are predictive of transdermal dosage forms and can be classified into following types:",
"title": ""
}
] |
scidocsrr
|
c4f6df2578bf51cc16b5ba09ef53a67d
|
Retrieving Similar Styles to Parse Clothing
|
[
{
"docid": "9f635d570b827d68e057afcaadca791c",
"text": "Researches have verified that clothing provides information about the identity of the individual. To extract features from the clothing, the clothing region first must be localized or segmented in the image. At the same time, given multiple images of the same person wearing the same clothing, we expect to improve the effectiveness of clothing segmentation. Therefore, the identity recognition and clothing segmentation problems are inter-twined; a good solution for one aides in the solution for the other. We build on this idea by analyzing the mutual information between pixel locations near the face and the identity of the person to learn a global clothing mask. We segment the clothing region in each image using graph cuts based on a clothing model learned from one or multiple images believed to be the same person wearing the same clothing. We use facial features and clothing features to recognize individuals in other images. The results show that clothing segmentation provides a significant improvement in recognition accuracy for large image collections, and useful clothing masks are simultaneously produced. A further significant contribution is that we introduce a publicly available consumer image collection where each individual is identified. We hope this dataset allows the vision community to more easily compare results for tasks related to recognizing people in consumer image collections.",
"title": ""
}
] |
[
{
"docid": "23676a52e1ed03d7b5c751a9986a7206",
"text": "Considering the increasingly complex media landscape and diversity of use, it is important to establish a common ground for identifying and describing the variety of ways in which people use new media technologies. Characterising the nature of media-user behaviour and distinctive user types is challenging and the literature offers little guidance in this regard. Hence, the present research aims to classify diverse user behaviours into meaningful categories of user types, according to the frequency of use, variety of use and content preferences. To reach a common framework, a review of the relevant research was conducted. An overview and meta-analysis of the literature (22 studies) regarding user typology was established and analysed with reference to (1) method, (2) theory, (3) media platform, (4) context and year, and (5) user types. Based on this examination, a unified Media-User Typology (MUT) is suggested. This initial MUT goes beyond the current research literature, by unifying all the existing and various user type models. A common MUT model can help the Human–Computer Interaction community to better understand both the typical users and the diversification of media-usage patterns more qualitatively. Developers of media systems can match the users’ preferences more precisely based on an MUT, in addition to identifying the target groups in the developing process. Finally, an MUT will allow a more nuanced approach when investigating the association between media usage and social implications such as the digital divide. 2010 Elsevier Ltd. All rights reserved. 1 Difficulties in understanding media-usage behaviour have also arisen because of",
"title": ""
},
{
"docid": "a0c36cccd31a1bf0a1e7c9baa78dd3fa",
"text": "Rapid progress in machine learning and artificial intelligence (AI) has brought increasing attention to the potential impacts of AI technologies on society. In this paper we discuss one such potential impact: the problem of accidents in machine learning systems, defined as unintended and harmful behavior that may emerge from poor design of real-world AI systems. We present a list of five practical research problems related to accident risk, categorized according to whether the problem originates from having the wrong objective function (“avoiding side effects” and “avoiding reward hacking”), an objective function that is too expensive to evaluate frequently (“scalable supervision”), or undesirable behavior during the learning process (“safe exploration” and “distributional shift”). We review previous work in these areas as well as suggesting research directions with a focus on relevance to cutting-edge AI systems. Finally, we consider the high-level question of how to think most productively about the safety of forward-looking",
"title": ""
},
{
"docid": "94aa50aa17651de3b9332d4556cd0ca3",
"text": "Fingerprinting of mobile device apps is currently an attractive and affordable data gathering technique. Even in the presence of encryption, it is possible to fingerprint a user's app by means of packet-level traffic analysis in which side-channel information is used to determine specific patterns in packets. Knowing the specific apps utilized by smartphone users is a serious privacy concern. In this study, we address the issue of defending against statistical traffic analysis of Android apps. First, we present a methodology for the identification of mobile apps using traffic analysis. Further, we propose confusion models in which we obfuscate packet lengths information leaked by mobile traffic, and we shape one class of app traffic to obscure its class features with minimum overhead. We assess the efficiency of our model using different apps and against a recently published approach for mobile apps classification. We focus on making it hard for intruders to differentiate between the altered app traffic and the actual one using statistical analysis. Additionally, we study the tradeoff between shaping cost and traffic privacy protection, specifically the needed overhead and realization feasibility. We were able to attain 91.1% classification accuracy. Using our obfuscation technique, we were able to reduce this accuracy to 15.78%.",
"title": ""
},
{
"docid": "da9ffb00398f6aad726c247e3d1f2450",
"text": "We propose noWorkflow, a tool that transparently captures provenance of scripts and enables reproducibility. Unlike existing approaches, noWorkflow is non-intrusive and does not require users to change the way they work – users need not wrap their experiments in scientific workflow systems, install version control systems, or instrument their scripts. The tool leverages Software Engineering techniques, such as abstract syntax tree analysis, reflection, and profiling, to collect different types of provenance, including detailed information about the underlying libraries. We describe how noWorkflow captures multiple kinds of provenance and the different classes of analyses it supports: graph-based visualization; differencing over provenance trails; and inference queries.",
"title": ""
},
{
"docid": "687dbb03f675f0bf70e6defa9588ae23",
"text": "This paper presents a novel method for discovering causal relations between events encoded in text. In order to determine if two events from the same sentence are in a causal relation or not, we first build a graph representation of the sentence that encodes lexical, syntactic, and semantic information. In a second step, we automatically extract multiple graph patterns (or subgraphs) from such graph representations and sort them according to their relevance in determining the causality between two events from the same sentence. Finally, in order to decide if these events are causal or not, we train a binary classifier based on what graph patterns can be mapped to the graph representation associated with the two events. Our experimental results show that capturing the feature dependencies of causal event relations using a graph representation significantly outperforms an existing method that uses a flat representation of features.",
"title": ""
},
{
"docid": "fcf1d5d56f52d814f0df3b02643ef71b",
"text": "The research work deals with an approach to perform texture and morphological based retrieval on a corpus of food grain images. The work has been carried out using Image Warping and Image analysis approach. The method has been employed to normalize food grain images and hence eliminating the effects of orientation using image warping technique with proper scaling. The images have been properly enhanced to reduce noise and blurring in image. Finally image has segmented applying proper segmentation methods so that edges may be detected effectively and thus rectification of the image has been done. The approach has been tested on sufficient number of food grain images of rice based on intensity, position and orientation. A digital image analysis algorithm based on color, morphological and textural features was developed to identify the six varieties rice seeds which are widely planted in Chhattisgarh region. Nine color and nine morphological and textural features were used for discriminant analysis. A back propagation neural network-based classifier was developed to identify the unknown grain types. The color and textural features were presented to the neural network for training purposes. The trained network was then used to identify the unknown grain types.",
"title": ""
},
{
"docid": "3bcf0e33007feb67b482247ef6702901",
"text": "Bitcoin is a popular cryptocurrency that records all transactions in a distributed append-only public ledger called blockchain. The security of Bitcoin heavily relies on the incentive-compatible proof-of-work (PoW) based distributed consensus protocol, which is run by the network nodes called miners. In exchange for the incentive, the miners are expected to maintain the blockchain honestly. Since its launch in 2009, Bitcoin economy has grown at an enormous rate, and it is now worth about 150 billions of dollars. This exponential growth in the market value of bitcoins motivate adversaries to exploit weaknesses for profit, and researchers to discover new vulnerabilities in the system, propose countermeasures, and predict upcoming trends. In this paper, we present a systematic survey that covers the security and privacy aspects of Bitcoin. We start by giving an overview of the Bitcoin system and its major components along with their functionality and interactions within the system. We review the existing vulnerabilities in Bitcoin and its major underlying technologies such as blockchain and PoW-based consensus protocol. These vulnerabilities lead to the execution of various security threats to the standard functionality of Bitcoin. We then investigate the feasibility and robustness of the state-of-the-art security solutions. Additionally, we discuss the current anonymity considerations in Bitcoin and the privacy-related threats to Bitcoin users along with the analysis of the existing privacy-preserving solutions. Finally, we summarize the critical open challenges, and we suggest directions for future research towards provisioning stringent security and privacy solutions for Bitcoin.",
"title": ""
},
{
"docid": "4d2c5785e60fa80febb176165622fca7",
"text": "In this paper, we propose a new algorithm to compute intrinsic means of organ shapes from 3D medical images. More specifically, we explore the feasibility of Karcher means in the framework of the large deformations by diffeomorphisms (LDDMM). This setting preserves the topology of the averaged shapes and has interesting properties to quantitatively describe their anatomical variability. Estimating Karcher means requires to perform multiple registrations between the averaged template image and the set of reference 3D images. Here, we use a recent algorithm based on an optimal control method to satisfy the geodesicity of the deformations at any step of each registration. We also combine this algorithm with organ specific metrics. We demonstrate the efficiency of our methodology with experimental results on different groups of anatomical 3D images. We also extensively discuss the convergence of our method and the bias due to the initial guess. A direct perspective of this work is the computation of 3D+time atlases.",
"title": ""
},
{
"docid": "73d3c622c98fba72ae2156df52c860d3",
"text": "We suggest analyzing neural networks through the prism of space constraints. We observe that most training algorithms applied in practice use bounded memory, which enables us to use a new notion introduced in the study of spacetime tradeoffs that we call mixing complexity. This notion was devised in order to measure the (in)ability to learn using a bounded-memory algorithm. In this paper we describe how we use mixing complexity to obtain new results on what can and cannot be learned using neural networks.",
"title": ""
},
{
"docid": "7e78dd27dd2d4da997ceef7e867b7cd2",
"text": "Extracting facial feature is a key step in facial expression recognition (FER). Inaccurate feature extraction very often results in erroneous categorizing of facial expressions. Especially in robotic application, environmental factors such as illumination variation may cause FER system to extract feature inaccurately. In this paper, we propose a robust facial feature point extraction method to recognize facial expression in various lighting conditions. Before extracting facial features, a face is localized and segmented from a digitized image frame. Face preprocessing stage consists of face normalization and feature region localization steps to extract facial features efficiently. As regions of interest corresponding to relevant features are determined, Gabor jets are applied based on Gabor wavelet transformation to extract the facial points. Gabor jets are more invariable and reliable than gray-level values, which suffer from ambiguity as well as illumination variation while representing local features. Each feature point can be matched by a phase-sensitivity similarity function in the relevant regions of interest. Finally, the feature values are evaluated from the geometric displacement of facial points. After tested using the AR face database and the database built in our lab, average facial expression recognition rates of 84.1% and 81.3% are obtained respectively.",
"title": ""
},
{
"docid": "4db8a0d39ef31b49f2b6d542a14b03a2",
"text": "Climate-smart agriculture is one of the techniques that maximizes agricultural outputs through proper management of inputs based on climatological conditions. Real-time weather monitoring system is an important tool to monitor the climatic conditions of a farm because many of the farms related problems can be solved by better understanding of the surrounding weather conditions. There are various designs of weather monitoring stations based on different technological modules. However, different monitoring technologies provide different data sets, thus creating vagueness in accuracy of the weather parameters measured. In this paper, a weather station was designed and deployed in an Edamame farm, and its meteorological data are compared with the commercial Davis Vantage Pro2 installed at the same farm. The results show that the lab-made weather monitoring system is equivalently efficient to measure various weather parameters. Therefore, the designed system welcomes low-income farmers to integrate it into their climate-smart farming practice.",
"title": ""
},
{
"docid": "59e29fa12539757b5084cab8f1e1b292",
"text": "This article addresses the problem of understanding mathematics described in natural language. Research in this area dates back to early 1960s. Several systems have so far been proposed to involve machines to solve mathematical problems of various domains like algebra, geometry, physics, mechanics, etc. This correspondence provides a state of the art technical review of these systems and approaches proposed by different research groups. A unified architecture that has been used in most of these approaches is identified and differences among the systems are highlighted. Significant achievements of each method are pointed out. Major strengths and weaknesses of the approaches are also discussed. Finally, present efforts and future trends in this research area are presented.",
"title": ""
},
{
"docid": "a73f07080a2f93a09b05b58184acf306",
"text": "This survey paper categorises, compares, and summarises from almost all published technical and review articles in automated fraud detection within the last 10 years. It defines the professional fraudster, formalises the main types and subtypes of known fraud, and presents the nature of data evidence collected within affected industries. Within the business context of mining the data to achieve higher cost savings, this research presents methods and techniques together with their problems. Compared to all related reviews on fraud detection, this survey covers much more technical articles and is the only one, to the best of our knowledge, which proposes alternative data and solutions from related domains.",
"title": ""
},
{
"docid": "103b9c101d18867f25d605bd0b314c51",
"text": "The human visual system is the most complex pattern recognition device known. In ways that are yet to be fully understood, the visual cortex arrives at a simple and unambiguous interpretation of data from the retinal image that is useful for the decisions and actions of everyday life. Recent advances in Bayesian models of computer vision and in the measurement and modeling of natural image statistics are providing the tools to test and constrain theories of human object perception. In turn, these theories are having an impact on the interpretation of cortical function.",
"title": ""
},
{
"docid": "eba9ec47b04e08ff2606efa9ffebb6f8",
"text": "OBJECTIVE\nThe incidence of neuroleptic malignant syndrome (NMS) is not known, but the frequency of its occurrence with conventional antipsychotic agents has been reported to vary from 0.02% to 2.44%.\n\n\nDATA SOURCES\nMEDLINE search conducted in January 2003 and review of references within the retrieved articles.\n\n\nDATA SYNTHESIS\nOur MEDLINE research yielded 68 cases (21 females and 47 males) of NMS associated with atypical antipsychotic drugs (clozapine, N = 21; risperidone, N = 23; olanzapine, N = 19; and quetiapine, N = 5). The fact that 21 cases of NMS with clozapine were found indicates that low occurrence of extrapyramidal symptoms (EPS) and low EPS-inducing potential do not prevent the occurrence of NMS and D(2) dopamine receptor blocking potential does not have direct correlation with the occurrence of NMS. One of the cardinal features of NMS is an increasing manifestation of EPS, and the conventional antipsychotic drugs are known to produce EPS in 95% or more of NMS cases. With atypical antipsychotic drugs, the incidence of EPS during NMS is of a similar magnitude.\n\n\nCONCLUSIONS\nFor NMS associated with atypical antipsychotic drugs, the mortality rate was lower than that with conventional antipsychotic drugs. However, the mortality rate may simply be a reflection of physicians' awareness and ensuing early treatment.",
"title": ""
},
{
"docid": "05760329560f86bf8008c0a952abd893",
"text": "Type 2 diabetes is rapidly increasing in prevalence worldwide, in concert with epidemics of obesity and sedentary behavior that are themselves tracking economic development. Within this broad pattern, susceptibility to diabetes varies substantially in association with ethnicity and nutritional exposures through the life-course. An evolutionary perspective may help understand why humans are so prone to this condition in modern environments, and why this risk is unequally distributed. A simple conceptual model treats diabetes risk as the function of two interacting traits, namely ‘metabolic capacity’ which promotes glucose homeostasis, and ‘metabolic load’ which challenges glucose homoeostasis. This conceptual model helps understand how long-term and more recent trends in body composition can be considered to have shaped variability in diabetes risk. Hominin evolution appears to have continued a broader trend evident in primates, towards lower levels of muscularity. In addition, hominins developed higher levels of body fatness, especially in females in relative terms. These traits most likely evolved as part of a broader reorganization of human life history traits in response to growing levels of ecological instability, enabling both survival during tough periods and reproduction during bountiful periods. Since the emergence of Homo sapiens, populations have diverged in body composition in association with geographical setting and local ecological stresses. These long-term trends in both metabolic capacity and adiposity help explain the overall susceptibility of humans to diabetes in ways that are similar to, and exacerbated by, the effects of nutritional exposures during the life-course.",
"title": ""
},
{
"docid": "c1b8beec6f2cb42b5a784630512525f3",
"text": "Scientific computing often requires the availability of a massive number of computers for performing large scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and super computers, which are difficult to setup, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, as well as applications, can be dynamically provisioned (and integrated within the existing infrastructure) on a pay per use basis. These resources can be released when they are no more needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensure the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible and service based infrastructure supports multiple programming paradigms that make Aneka address a variety of different scenarios: from finance applications to computational science. As examples of scientific computing in the Cloud, we present a preliminary case study on using Aneka for the classification of gene expression data and the execution of fMRI brain imaging workflow.",
"title": ""
},
{
"docid": "7ddf437114258023cc7d9c6d51bb8f94",
"text": "We describe a framework for cooperative control of a group of nonholonomic mobile robots that allows us to build complex systems from simple controllers and estimators. The resultant modular approach is attractive because of the potential for reusability. Our approach to composition also guarantees stability and convergence in a wide range of tasks. There are two key features in our approach: 1) a paradigm for switching between simple decentralized controllers that allows for changes in formation; 2) the use of information from a single type of sensor, an omnidirectional camera, for all our controllers. We describe estimators that abstract the sensory information at different levels, enabling both decentralized and centralized cooperative control. Our results include numerical simulations and experiments using a testbed consisting of three nonholonomic robots.",
"title": ""
},
{
"docid": "c760e6db820733dc3f57306eef81e5c9",
"text": "Recently, applying the novel data mining techniques for financial time-series forecasting has received much research attention. However, most researches are for the US and European markets, with only a few for Asian markets. This research applies Support-Vector Machines (SVMs) and Back Propagation (BP) neural networks for six Asian stock markets and our experimental results showed the superiority of both models, compared to the early researches.",
"title": ""
},
{
"docid": "49e1dc71e71b45984009f4ee20740763",
"text": "The ecosystem of open source software (OSS) has been growing considerably in size. In addition, code clones - code fragments that are copied and pasted within or between software systems - are also proliferating. Although code cloning may expedite the process of software development, it often critically affects the security of software because vulnerabilities and bugs can easily be propagated through code clones. These vulnerable code clones are increasing in conjunction with the growth of OSS, potentially contaminating many systems. Although researchers have attempted to detect code clones for decades, most of these attempts fail to scale to the size of the ever-growing OSS code base. The lack of scalability prevents software developers from readily managing code clones and associated vulnerabilities. Moreover, most existing clone detection techniques focus overly on merely detecting clones and this impairs their ability to accurately find \"vulnerable\" clones. In this paper, we propose VUDDY, an approach for the scalable detection of vulnerable code clones, which is capable of detecting security vulnerabilities in large software programs efficiently and accurately. Its extreme scalability is achieved by leveraging function-level granularity and a length-filtering technique that reduces the number of signature comparisons. This efficient design enables VUDDY to preprocess a billion lines of code in 14 hour and 17 minutes, after which it requires a few seconds to identify code clones. In addition, we designed a security-aware abstraction technique that renders VUDDY resilient to common modifications in cloned code, while preserving the vulnerable conditions even after the abstraction is applied. This extends the scope of VUDDY to identifying variants of known vulnerabilities, with high accuracy. In this study, we describe its principles and evaluate its efficacy and effectiveness by comparing it with existing mechanisms and presenting the vulnerabilities it detected. VUDDY outperformed four state-of-the-art code clone detection techniques in terms of both scalability and accuracy, and proved its effectiveness by detecting zero-day vulnerabilities in widely used software systems, such as Apache HTTPD and Ubuntu OS Distribution.",
"title": ""
}
] |
scidocsrr
|
0ee681e15fad3ce020562df645751692
|
Risk assessment model selection in construction industry
|
[
{
"docid": "ee0d11cbd2e723aff16af1c2f02bbc2b",
"text": "This study simplifies the complicated metric distance method [L.S. Chen, C.H. Cheng, Selecting IS personnel using ranking fuzzy number by metric distance method, Eur. J. Operational Res. 160 (3) 2005 803–820], and proposes an algorithm to modify Chen’s Fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) [C.T. Chen, Extensions of the TOPSIS for group decision-making under fuzzy environment, Fuzzy Sets Syst., 114 (2000) 1–9]. From experimental verification, Chen directly assigned the fuzzy numbers 1̃ and 0̃ as fuzzy positive ideal solution (PIS) and negative ideal solution (NIS). Chen’s method sometimes violates the basic concepts of traditional TOPSIS. This study thus proposes fuzzy hierarchical TOPSIS, which not only is well suited for evaluating fuzziness and uncertainty problems, but also can provide more objective and accurate criterion weights, while simultaneously avoiding the problem of Chen’s Fuzzy TOPSIS. For application and verification, this study presents a numerical example and build a practical supplier selection problem to verify our proposed method and compare it with other methods. 2008 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +886 5 5342601x5312; fax: +886 5 531 2077. E-mail addresses: [email protected] (J.-W. Wang), [email protected] (C.-H. Cheng), [email protected] (K.-C. Huang).",
"title": ""
}
] |
[
{
"docid": "14fb6228827657ba6f8d35d169ad3c63",
"text": "In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the first of two conference papers describing the derivation of these algorithms, connection with the related literature, extensions of the original framework, and new empirical evidence. In particular, the present paper outlines the derivation of AMP from standard sum-product belief propagation, and its extension in several directions. We also discuss relations with formal calculations based on statistical mechanics methods.",
"title": ""
},
{
"docid": "e06005f63efd6f8ca77f8b91d1b3b4a9",
"text": "Natural language generators for taskoriented dialogue must effectively realize system dialogue actions and their associated semantics. In many applications, it is also desirable for generators to control the style of an utterance. To date, work on task-oriented neural generation has primarily focused on semantic fidelity rather than achieving stylistic goals, while work on style has been done in contexts where it is difficult to measure content preservation. Here we present three different sequence-to-sequence models and carefully test how well they disentangle content and style. We use a statistical generator, PERSONAGE, to synthesize a new corpus of over 88,000 restaurant domain utterances whose style varies according to models of personality, giving us total control over both the semantic content and the stylistic variation in the training data. We then vary the amount of explicit stylistic supervision given to the three models. We show that our most explicit model can simultaneously achieve high fidelity to both semantic and stylistic goals: this model adds a context vector of 36 stylistic parameters as input to the hidden state of the encoder at each time step, showing the benefits of explicit stylistic supervision, even when the amount of training data is large.",
"title": ""
},
{
"docid": "0850f46a4bcbe1898a6a2dca9f61ea61",
"text": "Public opinion polarization is here conceived as a process of alignment along multiple lines of potential disagreement and measured as growing constraint in individuals' preferences. Using NES data from 1972 to 2004, the authors model trends in issue partisanship-the correlation of issue attitudes with party identification-and issue alignment-the correlation between pairs of issues-and find a substantive increase in issue partisanship, but little evidence of issue alignment. The findings suggest that opinion changes correspond more to a resorting of party labels among voters than to greater constraint on issue attitudes: since parties are more polarized, they are now better at sorting individuals along ideological lines. Levels of constraint vary across population subgroups: strong partisans and wealthier and politically sophisticated voters have grown more coherent in their beliefs. The authors discuss the consequences of partisan realignment and group sorting on the political process and potential deviations from the classic pluralistic account of American politics.",
"title": ""
},
{
"docid": "1742428cb9c72c90f94d0beb6ad3e086",
"text": "A key issue in face recognition is to seek an effective descriptor for representing face appearance. In the context of considering the face image as a set of small facial regions, this paper presents a new face representation approach coined spatial feature interdependence matrix (SFIM). Unlike classical face descriptors which usually use a hierarchically organized or a sequentially concatenated structure to describe the spatial layout features extracted from local regions, SFIM is attributed to the exploitation of the underlying feature interdependences regarding local region pairs inside a class specific face. According to SFIM, the face image is projected onto an undirected connected graph in a manner that explicitly encodes feature interdependence-based relationships between local regions. We calculate the pair-wise interdependence strength as the weighted discrepancy between two feature sets extracted in a hybrid feature space fusing histograms of intensity, local binary pattern and oriented gradients. To achieve the goal of face recognition, our SFIM-based face descriptor is embedded in three different recognition frameworks, namely nearest neighbor search, subspace-based classification, and linear optimization-based classification. Extensive experimental results on four well-known face databases and comprehensive comparisons with the state-of-the-art results are provided to demonstrate the efficacy of the proposed SFIM-based descriptor.",
"title": ""
},
{
"docid": "4d3468bb14b7ad933baac5c50feec496",
"text": "Conventional material removal techniques, like CNC milling, have been proven to be able to tackle nearly any machining challenge. On the other hand, the major drawback of using conventional CNC machines is the restricted working area and their produced shape limitation limitations. From a conceptual point of view, industrial robot technology could provide an excellent base for machining being both flexible and cost efficient. However, industrial machining robots lack absolute positioning accuracy, are unable to reject/absorb disturbances in terms of process forces and lack reliable programming and simulation tools to ensure right first time machining, at production startups. This paper reviews the penetration of industrial robots in the challenging field of machining.",
"title": ""
},
{
"docid": "68720a44720b4d80e661b58079679763",
"text": "The value of involving people as ‘users’ or ‘participants’ in the design process is increasingly becoming a point of debate. In this paper we describe a new framework, called ‘informant design’, which advocates efficiency of input from different people: maximizing the value of contributions tlom various informants and design team members at different stages of the design process. To illustrate how this can be achieved we describe a project that uses children and teachers as informants at difTerent stages to help us design an interactive learning environment for teaching ecology.",
"title": ""
},
{
"docid": "42903610920a47773627a33db25590f3",
"text": "We consider the analysis of Electroencephalography (EEG) and Local Field Potential (LFP) datasets, which are “big” in terms of the size of recorded data but rarely have sufficient labels required to train complex models (e.g., conventional deep learning methods). Furthermore, in many scientific applications, the goal is to be able to understand the underlying features related to the classification, which prohibits the blind application of deep networks. This motivates the development of a new model based on parameterized convolutional filters guided by previous neuroscience research; the filters learn relevant frequency bands while targeting synchrony, which are frequency-specific power and phase correlations between electrodes. This results in a highly expressive convolutional neural network with only a few hundred parameters, applicable to smaller datasets. The proposed approach is demonstrated to yield competitive (often state-of-the-art) predictive performance during our empirical tests while yielding interpretable features. Furthermore, a Gaussian process adapter is developed to combine analysis over distinct electrode layouts, allowing the joint processing of multiple datasets to address overfitting and improve generalizability. Finally, it is demonstrated that the proposed framework effectively tracks neural dynamics on children in a clinical trial on Autism Spectrum Disorder.",
"title": ""
},
{
"docid": "338e037f4ec9f6215f48843b9d03f103",
"text": "Sparse deep neural networks(DNNs) are efficient in both memory and compute when compared to dense DNNs. But due to irregularity in computation of sparse DNNs, their efficiencies are much lower than that of dense DNNs on general purpose hardwares. This leads to poor/no performance benefits for sparse DNNs. Performance issue for sparse DNNs can be alleviated by bringing structure to the sparsity and leveraging it for improving runtime efficiency. But such structural constraints often lead to sparse models with suboptimal accuracies. In this work, we jointly address both accuracy and performance of sparse DNNs using our proposed class of neural networks called HBsNN (Hierarchical Block sparse Neural Networks).",
"title": ""
},
{
"docid": "0d9057d8a40eb8faa7e67128a7d24565",
"text": "We develop efficient solution methods for a robust empirical risk minimization problem designed to give calibrated confidence intervals on performance and provide optimal tradeoffs between bias and variance. Our methods apply to distributionally robust optimization problems proposed by Ben-Tal et al., which put more weight on observations inducing high loss via a worst-case approach over a non-parametric uncertainty set on the underlying data distribution. Our algorithm solves the resulting minimax problems with nearly the same computational cost of stochastic gradient descent through the use of several carefully designed data structures. For a sample of size n, the per-iteration cost of our method scales as O(log n), which allows us to give optimality certificates that distributionally robust optimization provides at little extra cost compared to empirical risk minimization and stochastic gradient methods.",
"title": ""
},
{
"docid": "2ab8c692ef55d2501ff61f487f91da9c",
"text": "A common discussion subject for the male part of the population in particular, is the prediction of next weekend’s soccer matches, especially for the local team. Knowledge of offensive and defensive skills is valuable in the decision process before making a bet at a bookmaker. In this article we take an applied statistician’s approach to the problem, suggesting a Bayesian dynamic generalised linear model to estimate the time dependent skills of all teams in a league, and to predict next weekend’s soccer matches. The problem is more intricate than it may appear at first glance, as we need to estimate the skills of all teams simultaneously as they are dependent. It is now possible to deal with such inference problems using the iterative simulation technique known as Markov Chain Monte Carlo. We will show various applications of the proposed model based on the English Premier League and Division 1 1997-98; Prediction with application to betting, retrospective analysis of the final ranking, detection of surprising matches and how each team’s properties vary during the season.",
"title": ""
},
{
"docid": "9876e3f1438d39549a93ce4011bc0df0",
"text": "SIMON (Signal Interpretation and MONitoring) continuously collects, permanently stores, and processes bedside medical device data. Since 1998 SIMON has monitored over 3500 trauma intensive care unit (TICU) patients, representing approximately 250,000 hours of continuous monitoring and two billion data points, and is currently operational on all 14 TICU beds at Vanderbilt University Medical Center. This repository of dense physiologic data (heart rate, arterial, pulmonary, central venous, intracranial, and cerebral perfusion pressures, arterial and venous oxygen saturations, and other parameters sampled second-by-second) supports research to identify “new vital signs” features of patient physiology only observable through dense data capture and analysis more predictive of patient status than current measures. SIMON’s alerting and reporting capabilities, including web-based display, sentinel event notification via alphanumeric pagers, and daily summary reports of vital sign statistics, allow these discoveries to be rapidly tested and implemented in a working clinical environment. This",
"title": ""
},
{
"docid": "5ed409feee70554257e4974ab99674e0",
"text": "Text mining and information retrieval in large collections of scientific literature require automated processing systems that analyse the documents’ content. However, the layout of scientific articles is highly varying across publishers, and common digital document formats are optimised for presentation, but lack structural information. To overcome these challenges, we have developed a processing pipeline that analyses the structure a PDF document using a number of unsupervised machine learning techniques and heuristics. Apart from the meta-data extraction, which we reused from previous work, our system uses only information available from the current document and does not require any pre-trained model. First, contiguous text blocks are extracted from the raw character stream. Next, we determine geometrical relations between these blocks, which, together with geometrical and font information, are then used categorize the blocks into different classes. Based on this resulting logical structure we finally extract the body text and the table of contents of a scientific article. We separately evaluate the individual stages of our pipeline on a number of different datasets and compare it with other document structure analysis approaches. We show that it outperforms a state-of-the-art system in terms of the quality of the extracted body text and table of contents. Our unsupervised approach could provide a basis for advanced digital library scenarios that involve diverse and dynamic corpora.",
"title": ""
},
{
"docid": "012d6b86279e65237a3ad4515e4e439f",
"text": "The main purpose of the present study is to help managers cope with the negative effects of technostress on employee use of ICT. Drawing on transaction theory of stress (Cooper, Dewe, & O’Driscoll, 2001) and information systems (IS) continuance theory (Bhattacherjee, 2001) we investigate the effects of technostress on employee intentions to extend the use of ICT at work. Our results show that factors that create and inhibit technostress affect both employee satisfaction with the use of ICT and employee intentions to extend the use of ICT. Our findings have important implications for the management of technostress with regard to both individual stress levels and organizational performance. A key implication of our research is that managers should implement strategies for coping with technostress through the theoretical concept of technostress inhibitors. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "da64b7855ec158e97d48b31e36f100a5",
"text": "Named Entity Recognition (NER) is the task of classifying or labelling atomic elements in the text into categories such as Person, Location or Organisation. For Arabic language, recognizing named entities is a challenging task because of the complexity and the unique characteristics of this language. In addition, most of the previous work focuses on Modern Standard Arabic (MSA), however, recognizing named entities in social media is becoming more interesting these days. Dialectal Arabic (DA) and MSA are both used in social media, which is deemed as another challenging task. Most state-of-the-art Arabic NER systems count heavily on hand-crafted engineering features and lexicons which is time consuming. In this paper, we introduce a novel neural network architecture which benefits both from characterand word-level representations automatically, by using combination of bidirectional Long Short-Term Memory (LSTM) and Conditional Random Field (CRF), eliminating the need for most feature engineering. Moreover, our model relies on unsupervised word representations learned from unannotated corpora. Experimental results demonstrate that our model achieves state-of-the-art performance on publicly available benchmark for Arabic NER for social media and surpassing the previous system by a large margin.",
"title": ""
},
{
"docid": "8787335d8f5a459dc47b813fd385083b",
"text": "Human papillomavirus infection can cause a variety of benign or malignant oral lesions, and the various genotypes can cause distinct types of lesions. To our best knowledge, there has been no report of 2 different human papillomavirus-related oral lesions in different oral sites in the same patient before. This paper reported a patient with 2 different oral lesions which were clinically and histologically in accord with focal epithelial hyperplasia and oral papilloma, respectively. Using DNA extracted from these 2 different lesions, tissue blocks were tested for presence of human papillomavirus followed by specific polymerase chain reaction testing for 6, 11, 13, 16, 18, and 32 subtypes in order to confirm the clinical diagnosis. Finally, human papillomavirus-32-positive focal epithelial hyperplasia accompanying human papillomavirus-16-positive oral papilloma-like lesions were detected in different sites of the oral mucosa. Nucleotide sequence sequencing further confirmed the results. So in our clinical work, if the simultaneous occurrences of different human papillomavirus associated lesions are suspected, the multiple biopsies from different lesions and detection of human papillomavirus genotype are needed to confirm the diagnosis.",
"title": ""
},
{
"docid": "e643f7f29c2e96639a476abb1b9a38b1",
"text": "Weather forecasting has been one of the most scientifically and technologically challenging problem around the world. Weather data is one of the meteorological data that is rich with important information, which can be used for weather prediction We extract knowledge from weather historical data collected from Indian Meteorological Department (IMD) Pune. From the collected weather data comprising of 36 attributes, only 7 attributes are most relevant to rainfall prediction. We made data preprocessing and data transformation on raw weather data set, so that it shall be possible to work on Bayesian, the data mining, prediction model used for rainfall prediction. The model is trained using the training data set and has been tested for accuracy on available test data. The meteorological centers uses high performance computing and supercomputing power to run weather prediction model. To address the issue of compute intensive rainfall prediction model, we proposed and implemented data intensive model using data mining technique. Our model works with good accuracy and takes moderate compute resources to predict the rainfall. We have used Bayesian approach to prove our model for rainfall prediction, and found to be working well with good accuracy.",
"title": ""
},
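The passage above does not spell out which Bayesian model or which 7 attributes were selected, so the following is only a hedged sketch of rainfall prediction with a Gaussian naive Bayes classifier on hypothetical weather features; the file name and column names are assumptions, not values from the paper.

```python
# Illustrative sketch: the feature columns and CSV file below are placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Hypothetical preprocessed weather table with a binary "rain" label.
df = pd.read_csv("weather_preprocessed.csv")
features = ["temp_max", "temp_min", "humidity", "pressure",
            "wind_speed", "cloud_cover", "dew_point"]   # assumed 7 attributes
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["rain"], test_size=0.2, random_state=42)

model = GaussianNB()            # Bayesian classifier with Gaussian likelihoods
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```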
{
"docid": "9b1cf7cb855ba95693b90efacc34ac6d",
"text": "Cellular electron cryo-tomography enables the 3D visualization of cellular organization in the near-native state and at submolecular resolution. However, the contents of cellular tomograms are often complex, making it difficult to automatically isolate different in situ cellular components. In this paper, we propose a convolutional autoencoder-based unsupervised approach to provide a coarse grouping of 3D small subvolumes extracted from tomograms. We demonstrate that the autoencoder can be used for efficient and coarse characterization of features of macromolecular complexes and surfaces, such as membranes. In addition, the autoencoder can be used to detect non-cellular features related to sample preparation and data collection, such as carbon edges from the grid and tomogram boundaries. The autoencoder is also able to detect patterns that may indicate spatial interactions between cellular components. Furthermore, we demonstrate that our autoencoder can be used for weakly supervised semantic segmentation of cellular components, requiring a very small amount of manual annotation.",
"title": ""
},
{
"docid": "37c8fa72d0959a64460dbbe4fdb8c296",
"text": "This paper presents a model which generates architectural layout for a single flat having regular shaped spaces; Bedroom, Bathroom, Kitchen, Balcony, Living and Dining Room. Using constraints at two levels; Topological (Adjacency, Compactness, Vaastu, and Open and Closed face constraints) and Dimensional (Length to Width ratio constraint), Genetic Algorithms have been used to generate the topological arrangement of spaces in the layout and further if required, feasibility have been dimensionally analyzed. Further easy evacuation form the selected layout in case of adversity has been proposed using Dijkstra's Algorithm. Later the proposed model has been tested for efficiency using various test cases. This paper also presents a classification and categorization of various problems of space planning.",
"title": ""
},
{
"docid": "27fd4240452fe6f08af7fbf86e8acdf5",
"text": "Intrinsically motivated goal exploration processes enable agents to autonomously sample goals to explore efficiently complex environments with high-dimensional continuous actions. They have been applied successfully to real world robots to discover repertoires of policies producing a wide diversity of effects. Often these algorithms relied on engineered goal spaces but it was recently shown that one can use deep representation learning algorithms to learn an adequate goal space in simple environments. However, in the case of more complex environments containing multiple objects or distractors, an efficient exploration requires that the structure of the goal space reflects the one of the environment. In this paper we show that using a disentangled goal space leads to better exploration performances than an entangled one. We further show that when the representation is disentangled, one can leverage it by sampling goals that maximize learning progress in a modular manner. Finally, we show that the measure of learning progress, used to drive curiosity-driven exploration, can be used simultaneously to discover abstract independently controllable features of the environment. The code used in the experiments is available at https://github.com/flowersteam/ Curiosity_Driven_Goal_Exploration.",
"title": ""
},
{
"docid": "f383dd5dd7210105406c2da80cf72f89",
"text": "We present a new, \"greedy\", channel-router that is quick, simple, and highly effective. It always succeeds, usually using no more than one track more than required by channel density. (It may be forced in rare cases to make a few connections \"off the end\" of the channel, in order to succeed.) It assumes that all pins and wiring lie on a common grid, and that vertical wires are on one layer, horizontal on another. The greedy router wires up the channel in a left-to-right, column-by-column manner, wiring each column completely before starting the next. Within each column the router tries to maximize the utility of the wiring produced, using simple, \"greedy\" heuristics. It may place a net on more than one track for a few columns, and \"collapse\" the net to a single track later on, using a vertical jog. It may also use a jog to move a net to a track closer to its pin in some future column. The router may occasionally add a new track to the channel, to avoid \"getting stuck\".",
"title": ""
}
] |
scidocsrr
|
fce938256c887fe3dbbc4d11deb49869
|
Deep packet inspection using Cuckoo filter
|
[
{
"docid": "3f569eccc71c6186d6163a2cc40be0fc",
"text": "Deep Packet Inspection (DPI) is the state-of-the-art technology for traffic classification. According to the conventional wisdom, DPI is the most accurate classification technique. Consequently, most popular products, either commercial or open-source, rely on some sort of DPI for traffic classification. However, the actual performance of DPI is still unclear to the research community, since the lack of public datasets prevent the comparison and reproducibility of their results. This paper presents a comprehensive comparison of 6 well-known DPI tools, which are commonly used in the traffic classification literature. Our study includes 2 commercial products (PACE and NBAR) and 4 open-source tools (OpenDPI, L7-filter, nDPI, and Libprotoident). We studied their performance in various scenarios (including packet and flow truncation) and at different classification levels (application protocol, application and web service). We carefully built a labeled dataset with more than 750 K flows, which contains traffic from popular applications. We used the Volunteer-Based System (VBS), developed at Aalborg University, to guarantee the correct labeling of the dataset. We released this dataset, including full packet payloads, to the research community. We believe this dataset could become a common benchmark for the comparison and validation of network traffic classifiers. Our results present PACE, a commercial tool, as the most accurate solution. Surprisingly, we find that some open-source tools, such as nDPI and Libprotoident, also achieve very high accuracy.",
"title": ""
},
{
"docid": "debea8166d89cd6e43d1b6537658de96",
"text": "The emergence of SDNs promises to dramatically simplify network management and enable innovation through network programmability. Despite all the hype surrounding SDNs, exploiting its full potential is demanding. Security is still the key concern and is an equally striking challenge that reduces the growth of SDNs. Moreover, the deployment of novel entities and the introduction of several architectural components of SDNs pose new security threats and vulnerabilities. Besides, the landscape of digital threats and cyber-attacks is evolving tremendously, considering SDNs as a potential target to have even more devastating effects than using simple networks. Security is not considered as part of the initial SDN design; therefore, it must be raised on the agenda. This article discusses the state-of-the-art security solutions proposed to secure SDNs. We classify the security solutions in the literature by presenting a thematic taxonomy based on SDN layers/interfaces, security measures, simulation environments, and security objectives. Moreover, the article points out the possible attacks and threat vectors targeting different layers/interfaces of SDNs. The potential requirements and their key enablers for securing SDNs are also identified and presented. Also, the article gives great guidance for secure and dependable SDNs. Finally, we discuss open issues and challenges of SDN security that may be deemed appropriate to be tackled by researchers and professionals in the future.",
"title": ""
},
{
"docid": "4a5ee2e22999f2353e055550f9b4f0c5",
"text": "As the popularity of software-defined networks (SDN) and OpenFlow increases, policy-driven network management has received more attention. Manual configuration of multiple devices is being replaced by an automated approach where a software-based, network-aware controller handles the configuration of all network devices. Software applications running on top of the network controller provide an abstraction of the topology and facilitate the task of operating the network. We propose OpenSec, an OpenFlow-based security framework that allows a network security operator to create and implement security policies written in human-readable language. Using OpenSec, the user can describe a flow in terms of OpenFlow matching fields, define which security services must be applied to that flow (deep packet inspection, intrusion detection, spam detection, etc.) and specify security levels that define how OpenSec reacts if malicious traffic is detected. In this paper, we first provide a more detailed explanation of how OpenSec converts security policies into a series of OpenFlow messages needed to implement such a policy. Second, we describe how the framework automatically reacts to security alerts as specified by the policies. Third, we perform additional experiments on the GENI testbed to evaluate the scalability of the proposed framework using existing datasets of campus networks. Our results show that up to 95% of attacks in an existing data set can be detected and 99% of malicious source nodes can be blocked automatically. Furthermore, we show that our policy specification language is simpler while offering fast translation times compared to existing solutions.",
"title": ""
}
] |
[
{
"docid": "ba4ffbb6c3dc865f803cbe31b52919c5",
"text": "This investigation is one in a series of studies that address the possibility of stroke rehabilitation using robotic devices to facilitate “adaptive training.” Healthy subjects, after training in the presence of systematically applied forces, typically exhibit a predictable “after-effect.” A critical question is whether this adaptive characteristic is preserved following stroke so that it might be exploited for restoring function. Another important question is whether subjects benefit more from training forces that enhance their errors than from forces that reduce their errors. We exposed hemiparetic stroke survivors and healthy age-matched controls to a pattern of disturbing forces that have been found by previous studies to induce a dramatic adaptation in healthy individuals. Eighteen stroke survivors made 834 movements in the presence of a robot-generated force field that pushed their hands proportional to its speed and perpendicular to its direction of motion — either clockwise or counterclockwise. We found that subjects could adapt, as evidenced by significant after-effects. After-effects were not correlated with the clinical scores that we used for measuring motor impairment. Further examination revealed that significant improvements occurred only when the training forces magnified the original errors, and not when the training forces reduced the errors or were zero. Within this constrained experimental task we found that error-enhancing therapy (as opposed to guiding the limb closer to the correct path) to be more effective than therapy that assisted the subject.",
"title": ""
},
{
"docid": "8824df4c6f2a95cbb43b45a56a2657d1",
"text": "Semi-supervised classification has become an active topic recently and a number of algorithms, such as Self-training, have been proposed to improve the performance of supervised classification using unlabeled data. In this paper, we propose a semi-supervised learning framework which combines clustering and classification. Our motivation is that clustering analysis is a powerful knowledge-discovery tool and it may clustering is integrated into Self-training classification to help train a better classifier. In particular, the semi-supervised fuzzy c-means algorithm and support vector machines are used for clustering and classification, respectively. Experimental results on artificial and real datasets demonstrate the advantages of the proposed framework. & 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1c259f13e6ff506cb72fb5d1790c4a42",
"text": "oastal areas, the place where the waters of the seas meet the land are indeed unique places in our global geography. They are endowed with a very wide range of coastal ecosystems like mangroves, coral reefs, lagoons, sea grass, salt marsh, estuary etc. They are unique in a very real economic sense as sites for port and harbour facilities that capture the large monetary benefits associated with waterborne commerce and are highly valued and greatly attractive as sites for resorts and as vacation destinations. The combination of freshwater and salt water in coastal estuaries creates some of the most productive and richest habitats on earth; the resulting bounty in fishes and other marine life can be of great value to coastal nations. In many locations, the coastal topography formed over the millennia provides significant protection from hurricanes, typhoons, and other ocean related disturbances. But these values could diminish or even be lost, if they are not managed. Pollution of coastal waters can greatly reduce the production of fish, as can degradation of coastal nursery grounds and other valuable wetland habitats. The storm protection afforded by fringing reefs and mangrove forests can be lost if the corals die or the mangroves removed. Inappropriate development and accompanying despoilment can reduce the attractiveness of the coastal environment, greatly affecting tourism potential. Even ports and harbours require active management if they are to remain productive and successful over the long term. Coastal ecosystem management is thus immensely important for the sustainable use, development and protection of the coastal and marine areas and resources. To achieve this, an understanding of the coastal processes that influence the coastal environments and the ways in which they interact is necessary. It is advantageous to adopt a holistic or systematic approach for solving the coastal problems, since understanding the processes and products of interaction in coastal environments is very complicated. A careful assessment of changes that occur in the coastal C",
"title": ""
},
{
"docid": "4eb1e1ff406e6520defc293fd8b36cc5",
"text": "Biodegradation of two superabsorbent polymers, a crosslinked, insoluble polyacrylate and an insoluble polyacrylate/ polyacrylamide copolymer, in soil by the white-rot fungus, Phanerochaete chrysosporium was investigated. The polymers were both solubilized and mineralized by the fungus but solubilization and mineralization of the copolymer was much more rapid than of the polyacrylate. Soil microbes poorly solublized the polymers and were unable to mineralize either intact polymer. However, soil microbes cooperated with the fungus during polymer degradation in soil, with the fungus solubilizing the polymers and the soil microbes stimulating mineralization. Further, soil microbes were able to significantly mineralize both polymers after solubilization by P. chrysosporium grown under conditions that produced fungal peroxidases or cellobiose dehydrogenase, or after solubilization by photochemically generated Fenton reagent. The results suggest that biodegradation of these polymers in soil is best under conditions that maximize solubilization.",
"title": ""
},
{
"docid": "112f10eb825a484850561afa7c23e71f",
"text": "We describe an image based rendering approach that generalizes many current image based rendering algorithms, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations (i.e., not restricted to a plane or to any specific manifold). In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. When presented with fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. The algorithm achieves this flexibility because it is designed to meet a set of specific goals that we describe. We demonstrate this flexibility with a variety of examples.",
"title": ""
},
{
"docid": "8b8f4ddff20f2321406625849af8766a",
"text": "This paper provides an introduction to specifying multilevel models using PROC MIXED. After a brief introduction to the field of multilevel modeling, users are provided with concrete examples of how PROC MIXED can be used to estimate (a) two-level organizational models, (b) two-level growth models, and (c) three-level organizational models. Both random intercept and random intercept and slope models are illustrated. Examples are shown using different real world data sources, including the publically available Early Childhood Longitudinal Study–Kindergarten cohort data. For each example, different research questions are examined through both narrative explanations and examples of the PROC MIXED code and corresponding output.",
"title": ""
},
{
"docid": "0b106d338b55650fc108c97bcb7aeb13",
"text": "Emerging applications face the need to store and analyze interconnected data that are naturally depicted as graphs. Recent proposals take the idea of data cubes that have been successfully applied to multidimensional data and extend them to work for interconnected datasets. In our work we revisit the graph cube framework and propose novel mechanisms inspired from information theory in order to help the analyst quickly locate interesting relationships within the rich information contained in the graph cube. The proposed entropy-based filtering of data reveals irregularities and non-uniformity which are often what the decision maker is looking for. We experimentally validate our techniques and demonstrate that the proposed entropy-based filtering can help eliminate large portions of the respective graph cubes.",
"title": ""
},
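As a rough illustration of the entropy-based filtering idea in the passage above, the snippet below ranks hypothetical graph-cube aggregates by normalized Shannon entropy so that highly non-uniform (potentially interesting) groupings surface first. The exact scoring rule used in the paper is not given, so this particular ranking is an assumption.

```python
# Hedged sketch: the aggregates and the normalized-entropy score are illustrative.
import numpy as np

def shannon_entropy(counts):
    """Entropy (in bits) of a histogram of non-negative counts."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

def rank_by_nonuniformity(groupings):
    """groupings: dict mapping a grouping name to its list of aggregate counts.
    Lower normalized entropy = more skewed distribution = potentially more interesting."""
    scores = {}
    for name, counts in groupings.items():
        max_h = np.log2(len(counts)) if len(counts) > 1 else 1.0
        scores[name] = shannon_entropy(counts) / max_h
    return sorted(scores.items(), key=lambda kv: kv[1])

# Hypothetical aggregates from a graph cube (e.g., edge counts per group pair).
print(rank_by_nonuniformity({
    "gender x gender": [120, 118, 121, 119],   # near-uniform, uninteresting
    "city x industry": [950, 12, 8, 30],       # highly skewed, surfaces first
}))
```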
{
"docid": "968ea2dcfd30492a81a71be25f16e350",
"text": "Tree-structured data are becoming ubiquitous nowadays and manipulating them based on similarity is essential for many applications. The generally accepted similarity measure for trees is the edit distance. Although similarity search has been extensively studied, searching for similar trees is still an open problem due to the high complexity of computing the tree edit distance. In this paper, we propose to transform tree-structured data into an approximate numerical multidimensional vector which encodes the original structure information. We prove that the L1 distance of the corresponding vectors, whose computational complexity is O(|T1| + |T2|), forms a lower bound for the edit distance between trees. Based on the theoretical analysis, we describe a novel algorithm which embeds the proposed distance into a filter-and-refine framework to process similarity search on tree-structured data. The experimental results show that our algorithm reduces dramatically the distance computation cost. Our method is especially suitable for accelerating similarity query processing on large trees in massive datasets.",
"title": ""
},
{
"docid": "f32ff72da2f90ed0e5279815b0fb10e0",
"text": "We investigate the application of non-orthogonal multiple access (NOMA) with successive interference cancellation (SIC) in downlink multiuser multiple-input multiple-output (MIMO) cellular systems, where the total number of receive antennas at user equipment (UE) ends in a cell is more than the number of transmit antennas at the base station (BS). We first dynamically group the UE receive antennas into a number of clusters equal to or more than the number of BS transmit antennas. A single beamforming vector is then shared by all the receive antennas in a cluster. We propose a linear beamforming technique in which all the receive antennas can significantly cancel the inter-cluster interference. On the other hand, the receive antennas in each cluster are scheduled on the power domain NOMA basis with SIC at the receiver ends. For inter-cluster and intra-cluster power allocation, we provide dynamic power allocation solutions with an objective to maximizing the overall cell capacity. An extensive performance evaluation is carried out for the proposed MIMO-NOMA system and the results are compared with those for conventional orthogonal multiple access (OMA)-based MIMO systems and other existing MIMO-NOMA solutions. The numerical results quantify the capacity gain of the proposed MIMO-NOMA model over MIMO-OMA and other existing MIMO-NOMA solutions.",
"title": ""
},
{
"docid": "29822df06340218a43fbcf046cbeb264",
"text": "Twitter provides search services to help people find new users to follow by recommending popular users or their friends' friends. However, these services do not offer the most relevant users to follow for a user. Furthermore, Twitter does not provide yet the search services to find the most interesting tweet messages for a user either. In this paper, we propose TWITOBI, a recommendation system for Twitter using probabilistic modeling for collaborative filtering which can recommend top-K users to follow and top-K tweets to read for a user. Our novel probabilistic model utilizes not only tweet messages but also the relationships between users. We develop an estimation algorithm for learning our model parameters and present its parallelized algorithm using MapReduce to handle large data. Our performance study with real-life data sets confirms the effectiveness and scalability of our algorithms.",
"title": ""
},
{
"docid": "db529d673e5d1b337e98e05eeb35fcf4",
"text": "Human group activities detection in multi-camera CCTV surveillance videos is a pressing demand on smart surveillance. Previous works on this topic are mainly based on camera topology inference that is hard to apply to real-world unconstrained surveillance videos. In this paper, we propose a new approach for multi-camera group activities detection. Our approach simultaneously exploits intra-camera and inter-camera contexts without topology inference. Specifically, a discriminative graphical model with hidden variables is developed. The intra-camera and inter-camera contexts are characterized by the structure of hidden variables. By automatically optimizing the structure, the contexts are effectively explored. Furthermore, we propose a new spatiotemporal feature, named vigilant area (VA), to characterize the quantity and appearance of the motion in an area. This feature is effective for group activity representation and is easy to extract from a dynamic and crowded scene. We evaluate the proposed VA feature and discriminative graphical model extensively on two real-world multi-camera surveillance video data sets, including a public corpus consisting of 2.5 h of videos and a 468-h video collection, which, to the best of our knowledge, is the largest video collection ever used in human activity detection. The experimental results demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "b15c689ff3dd7b2e7e2149e73b5451ac",
"text": "The Web provides a fertile ground for word-of-mouth communication and more and more consumers write about and share product-related experiences online. Given the experiential nature of tourism, such first-hand knowledge communicated by other travelers is especially useful for travel decision making. However, very little is known about what motivates consumers to write online travel reviews. A Web-based survey using an online consumer panel was conducted to investigate consumers’ motivations to write online travel reviews. Measurement scales to gauge the motivations to contribute online travel reviews were developed and tested. The results indicate that online travel review writers are mostly motivated by helping a travel service provider, concerns for other consumers, and needs for enjoyment/positive self-enhancement. Venting negative feelings through postings is clearly not seen as an important motive. Motivational differences were found for gender and income level. Implications of the findings for online travel communities and tourism marketers are discussed.",
"title": ""
},
{
"docid": "de298bb631dd0ca515c161b6e6426a85",
"text": "We address the problem of sharpness enhancement of images. Existing hierarchical techniques that decompose an image into a smooth image and high frequency components based on Gaussian filter and bilateral filter suffer from halo effects, whereas techniques based on weighted least squares extract low contrast features as detail. Other techniques require multiple images and are not tolerant to noise.",
"title": ""
},
{
"docid": "ae5df62bc13105298ae28d11a0a92ffa",
"text": "This work presents an approach for estimating the effect of the fractional-N phase locked loop (Frac-N PLL) phase noise profile on frequency modulated continuous wave (FMCW) radar precision. Unlike previous approaches, the proposed modelling method takes the actual shape of the phase noise profile into account leading to insights on the main regions dominating the precision. Estimates from the proposed model are in very good agreement with statistical simulations and measurement results from an FMCW radar test chip fabricated on an IBM7WL BiCMOS 0.18 μm technology. At 5.8 GHz center frequency, a close-in phase noise of −75 dBc/Hz at 1 kHz offset is measured. A root mean squared (RMS) chirp nonlinearity error of 14.6 kHz and a ranging precision of 0.52 cm are achieved which competes with state-of-the-art FMCW secondary radars.",
"title": ""
},
{
"docid": "0ed9c61670394b46c657593d71aa25e4",
"text": "We developed a novel computational framework to predict the perceived trustworthiness of host profile texts in the context of online lodging marketplaces. To achieve this goal, we developed a dataset of 4,180 Airbnb host profiles annotated with perceived trustworthiness. To the best of our knowledge, the dataset along with our models allow for the first computational evaluation of perceived trustworthiness of textual profiles, which are ubiquitous in online peer-to-peer marketplaces. We provide insights into the linguistic factors that contribute to higher and lower perceived trustworthiness for profiles of different lengths.",
"title": ""
},
{
"docid": "c2aa986c09f81c6ab54b0ac117d03afb",
"text": "Many companies have developed strategies that include investing heavily in information technology (IT) in order to enhance their performance. Yet, this investment pays off for some companies but not others. This study proposes that organization learning plays a significant role in determining the outcomes of IT. Drawing from resource theory and IT literature, the authors develop the concept of IT competency. Using structural equations modeling with data collected from managers in 271 manufacturing firms, they show that organizational learning plays a significant role in mediating the effects of IT competency on firm performance. Copyright 2003 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "fec4e9bb14c0071e64a5d6c25281a8d1",
"text": "Currently, we are witnessing a growing trend in the study and application of problems in the framework of Big Data. This is mainly due to the great advantages which come from the knowledge extraction from a high volume of information. For this reason, we observe a migration of the standard Data Mining systems towards a new functional paradigm that allows at working with Big Data. By means of the MapReduce model and its different extensions, scalability can be successfully addressed, while maintaining a good fault tolerance during the execution of the algorithms. Among the different approaches used in Data Mining, those models based on fuzzy systems stand out for many applications. Among their advantages, we must stress the use of a representation close to the natural language. Additionally, they use an inference model that allows a good adaptation to different scenarios, especially those with a given degree of uncertainty. Despite the success of this type of systems, their migration to the Big Data environment in the different learning areas is at a preliminary stage yet. In this paper, we will carry out an overview of the main existing proposals on the topic, analyzing the design of these models. Additionally, we will discuss those problems related to the data distribution and parallelization of the current algorithms, and also its relationship with the fuzzy representation of the information. Finally, we will provide our view on the expectations for the future in this framework according to the design of those methods based on fuzzy sets, as well as the open challenges on the topic.",
"title": ""
},
{
"docid": "bc94f3c157857f7f1581c9d23e65ad87",
"text": "The increasing development of educational games as Learning Objects is an important resource for educators to motivate and engage students in several online courses using Learning Management Systems. Students learn more, and enjoy themselves more when they are actively involved, rather than just passive listeners. This article has the aim to discuss about how games may motivate learners, enhancing the teaching-learning process as well as assist instructor designers to develop educational games in order to engage students, based on theories of motivational design and emotional engagement. Keywords— learning objects; games; Learning Management Systems, motivational design, emotional engagement.",
"title": ""
},
{
"docid": "5d0a77058d6b184cb3c77c05363c02e0",
"text": "For two-class discrimination, Ref. [1] claimed that, when covariance matrices of the two classes were unequal, a (class) unbalanced dataset had a negative effect on the performance of linear discriminant analysis (LDA). Through re-balancing 10 realworld datasets, Ref. [1] provided empirical evidence to support the claim using AUC (Area Under the receiver operating characteristic Curve) as the performance metric. We suggest that such a claim is vague if not misleading, there is no solid theoretical analysis presented in [1], and AUC can lead to a quite different conclusion from that led to by misclassification error rate (ER) on the discrimination performance of LDA for unbalanced datasets. Our empirical and simulation studies suggest that, for LDA, the increase of the median of AUC (and thus the improvement of performance of LDA) from re-balancing is relatively small, while, in contrast, the increase of the median of ER (and thus the decline in performance of LDA) from re-balancing is relatively large. Therefore, from our study, there is no reliable empirical evidence to support the claim that a (class) unbalanced data set has a negative effect on the performance of LDA. In addition, re-balancing affects the performance of LDA for datasets with either equal or unequal covariance matrices, indicating that having unequal covariance matrices is not a key reason for the difference in performance between original and re-balanced data.",
"title": ""
},
{
"docid": "d1940d5db7b1c8b9c7b4ca6ac9463147",
"text": "In the new interconnected world, we need to secure vehicular cyber-physical systems (VCPS) using sophisticated intrusion detection systems. In this article, we present a novel distributed intrusion detection system (DIDS) designed for a vehicular ad hoc network (VANET). By combining static and dynamic detection agents, that can be mounted on central vehicles, and a control center where the alarms about possible attacks on the system are communicated, the proposed DIDS can be used in both urban and highway environments for real time anomaly detection with good accuracy and response time.",
"title": ""
}
] |
scidocsrr
|
c1d39b7a58c4b82c78924481e351db2b
|
RFID technology: Beyond cash-based methods in vending machine
|
[
{
"docid": "31fc886990140919aabce17aa7774910",
"text": "Today, at the low end of the communication protocols we find the inter-integrated circuit (I2C) and the serial peripheral interface (SPI) protocols. Both protocols are well suited for communications between integrated circuits for slow communication with on-board peripherals. The two protocols coexist in modern digital electronics systems, and they probably will continue to compete in the future, as both I2C and SPI are actually quite complementary for this kind of communication.",
"title": ""
},
{
"docid": "8eeeb23e7e1bf5ad3300c9a46f080829",
"text": "The purpose of this paper is to develop a wireless system to detect and maintain the attendance of a student and locate a student. For, this the students ID (identification) card is tagged with an Radio-frequency identification (RFID) passive tag which is matched against the database and only finalized once his fingerprint is verified using the biometric fingerprint scanner. The guardian is intimated by a sms (short message service) sent using the GSM (Global System for Mobile Communications) Modem of the same that is the student has reached the university or not on a daily basis, in the present day every guardian is worried whether his child has reached safely or not. In every classroom, laboratory, libraries, staffrooms etc. a RFID transponder is installed through which we will be detecting the location of the student and staff. There will be a website through which the student, teacher and the guardians can view the status of attendance and location of a student at present in the campus. A person needs to be located can be done by two means that is via the website or by sending the roll number of the student as an sms to the GSM modem which will reply by taking the last location stored of the student in the database.",
"title": ""
},
{
"docid": "5f5d5f4dc35f06eaedb19d1841d7ab3d",
"text": "In this paper we propose an implementation technique for sequential circuit using single electron tunneling technology (SET-s) with the example of designing of a “coffee vending machine” with the goal of getting low power and faster operation. We implement the proposed design based on single electron encoded logic (SEEL).The circuit is tested and compared with the existing CMOS technology.",
"title": ""
}
] |
[
{
"docid": "d763198d3bfb1d30b153e13245c90c08",
"text": "Inspired by the aerial maneuvering ability of lizards, we present the design and control of MSU (Michigan State University) tailbot - a miniature-tailed jumping robot. The robot can not only wheel on the ground, but also jump up to overcome obstacles. Moreover, once leaping into the air, it can control its body angle using an active tail to dynamically maneuver in midair for safe landings. We derive the midair dynamics equation and design controllers, such as a sliding mode controller, to stabilize the body at desired angles. To the best of our knowledge, this is the first miniature (maximum size 7.5 cm) and lightweight (26.5 g) robot that can wheel on the ground, jump to overcome obstacles, and maneuver in midair. Furthermore, tailbot is equipped with on-board energy, sensing, control, and wireless communication capabilities, enabling tetherless or autonomous operations. The robot in this paper exemplifies the integration of mechanical design, embedded system, and advanced control methods that will inspire the next-generation agile robots mimicking their biological counterparts. Moreover, it can serve as mobile sensor platforms for wireless sensor networks with many field applications.",
"title": ""
},
{
"docid": "1d73817f8b1b54a82308106ee526a62b",
"text": "To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person's face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of markers and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction and extension to multi-view reconstruction. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org.",
"title": ""
},
{
"docid": "2cde7564c83fe2b75135550cb4847af0",
"text": "The twenty-first century global population will be increasingly urban-focusing the sustainability challenge on cities and raising new challenges to address urban resilience capacity. Landscape ecologists are poised to contribute to this challenge in a transdisciplinary mode in which science and research are integrated with planning policies and design applications. Five strategies to build resilience capacity and transdisciplinary collaboration are proposed: biodiversity; urban ecological networks and connectivity; multifunctionality; redundancy and modularization, adaptive design. Key research questions for landscape ecologists, planners and designers are posed to advance the development of knowledge in an adaptive mode.",
"title": ""
},
{
"docid": "238ae411572961116e47b7f6ebce974c",
"text": "Coercing new programmers to adopt disciplined development practices such as thorough unit testing is a challenging endeavor. Test-driven development (TDD) has been proposed as a solution to improve both software design and testing. Test-driven learning (TDL) has been proposed as a pedagogical approach for teaching TDD without imposing significant additional instruction time.\n This research evaluates the effects of students using a test-first (TDD) versus test-last approach in early programming courses, and considers the use of TDL on a limited basis in CS1 and CS2. Software testing, programmer productivity, programmer performance, and programmer opinions are compared between test-first and test-last programming groups. Results from this research indicate that a test-first approach can increase student testing and programmer performance, but that early programmers are very reluctant to adopt a test-first approach, even after having positive experiences using TDD. Further, this research demonstrates that TDL can be applied in CS1/2, but suggests that a more pervasive implementation of TDL may be necessary to motivate and establish disciplined testing practice among early programmers.",
"title": ""
},
{
"docid": "b8b1c342a2978f74acd38bed493a77a5",
"text": "With the rapid growth of battery-powered portable electronics, an efficient power management solution is necessary for extending battery life. Generally, basic switching regulators, such as buck and boost converters, may not be capable of using the entire battery output voltage range (e.g., 2.5-4.7 V for Li-ion batteries) to provide a fixed output voltage (e.g., 3.3 V). In this paper, an average-current-mode noninverting buck-boost dc-dc converter is proposed. It is not only able to use the full output voltage range of a Li-ion battery, but it also features high power efficiency and excellent noise immunity. The die area of this chip is 2.14 × 1.92 mm2, fabricated by using TSMC 0.35 μm 2P4M 3.3 V/5 V mixed-signal polycide process. The input voltage of the converter may range from 2.3 to 5 V with its output voltage set to 3.3 V, and its switching frequency is 500 kHz. Moreover, it can provide up to 400-mA load current, and the maximal measured efficiency is 92.01%.",
"title": ""
},
{
"docid": "e86247471d4911cb84aa79911547045b",
"text": "Creating rich representations of environments requires integration of multiple sensing modalities with complementary characteristics such as range and imaging sensors. To precisely combine multisensory information, the rigid transformation between different sensor coordinate systems (i.e., extrinsic parameters) must be estimated. The majority of existing extrinsic calibration techniques require one or multiple planar calibration patterns (such as checkerboards) to be observed simultaneously from the range and imaging sensors. The main limitation of these approaches is that they require modifying the scene with artificial targets. In this paper, we present a novel algorithm for extrinsically calibrating a range sensor with respect to an image sensor with no requirement of external artificial targets. The proposed method exploits natural linear features in the scene to precisely determine the rigid transformation between the coordinate frames. First, a set of 3D lines (plane intersection and boundary line segments) are extracted from the point cloud, and a set of 2D line segments are extracted from the image. Correspondences between the 3D and 2D line segments are used as inputs to an optimization problem which requires jointly estimating the relative translation and rotation between the coordinate frames. The proposed method is not limited to any particular types or configurations of sensors. To demonstrate robustness, efficiency and generality of the presented algorithm, we include results using various sensor configurations.",
"title": ""
},
{
"docid": "06b0708250515510b8a3fc302045fe4b",
"text": "While the subject of cyberbullying of children and adolescents has begun to be addressed, less attention and research have focused on cyberbullying in the workplace. Male-dominated workplaces such as manufacturing settings are found to have an increased risk of workplace bullying, but the prevalence of cyberbullying in this sector is not known. This exploratory study investigated the prevalence and methods of face-to-face bullying and cyberbullying of males at work. One hundred three surveys (a modified version of the revised Negative Acts Questionnaire [NAQ-R]) were returned from randomly selected members of the Australian Manufacturing Workers' Union (AMWU). The results showed that 34% of respondents were bullied face-to-face, and 10.7% were cyberbullied. All victims of cyberbullying also experienced face-to-face bullying. The implications for organizations' \"duty of care\" in regard to this new form of bullying are indicated.",
"title": ""
},
{
"docid": "86a49d9f751cab880b7a4f8f516e9183",
"text": "1. THE GALLERY This paper describes the design and implementation of a virtual art gallery on the Web. The main objective of the virtual art gallery is to provide a navigational environment for the Networked Digital Library of Theses and Dissertations (NDLTD see http://www.ndltd.org and Fox et al., 1996), where users can browse and search electronic theses and dissertations (ETDs) indexed by the pictures--images and figures--that the ETDs contain. The galleries in the system and the ETD collection are simultaneously organized using a college/department taxonomic hierarchy. The system architecture has the general characteristics of the client-server model. Users access the system by using a Web browser to follow a link to one of the named art galleries. This link has the form of a CGI query which is passed by the Web server to a CGI program that handles it as a data request by accessing the database server. The query results are used to generate a view of the art gallery (as a VRML or HTML file) and are sent back to the user’s browser. We employ several technologies for improving the system’s efficiency and flexibility. We define a Simple Template Language (STL) to specify the format of VRML or HTML templates. Documents are generated by merging pre-designed VRML or HTML files and the data coming from the Database. Therefore, the generated virtual galleries always contain the most recent database information. We also define a Data Service Protocol (DSP), which normalizes the request from the user to communicate it with the Data Server and ultimately the image database.",
"title": ""
},
{
"docid": "06b0ad9465d3e83723db53cdca19167a",
"text": "SUMMARY\nSOAP2 is a significantly improved version of the short oligonucleotide alignment program that both reduces computer memory usage and increases alignment speed at an unprecedented rate. We used a Burrows Wheeler Transformation (BWT) compression index to substitute the seed strategy for indexing the reference sequence in the main memory. We tested it on the whole human genome and found that this new algorithm reduced memory usage from 14.7 to 5.4 GB and improved alignment speed by 20-30 times. SOAP2 is compatible with both single- and paired-end reads. Additionally, this tool now supports multiple text and compressed file formats. A consensus builder has also been developed for consensus assembly and SNP detection from alignment of short reads on a reference genome.\n\n\nAVAILABILITY\nhttp://soap.genomics.org.cn.",
"title": ""
},
{
"docid": "aa58cb2b2621da6260aeb203af1bd6f1",
"text": "Aspect-based opinion mining from online reviews has attracted a lot of attention recently. The main goal of all of the proposed methods is extracting aspects and/or estimating aspect ratings. Recent works, which are often based on Latent Dirichlet Allocation (LDA), consider both tasks simultaneously. These models are normally trained at the item level, i.e., a model is learned for each item separately. Learning a model per item is fine when the item has been reviewed extensively and has enough training data. However, in real-life data sets such as those from Epinions.com and Amazon.com more than 90% of items have less than 10 reviews, so-called cold start items. State-of-the-art LDA models for aspect-based opinion mining are trained at the item level and therefore perform poorly for cold start items due to the lack of sufficient training data. In this paper, we propose a probabilistic graphical model based on LDA, called Factorized LDA (FLDA), to address the cold start problem. The underlying assumption of FLDA is that aspects and ratings of a review are influenced not only by the item but also by the reviewer. It further assumes that both items and reviewers can be modeled by a set of latent factors which represent their aspect and rating distributions. Different from state-of-the-art LDA models, FLDA is trained at the category level and learns the latent factors using the reviews of all the items of a category, in particular the non cold start items, and uses them as prior for cold start items. Our experiments on three real-life data sets demonstrate the improved effectiveness of the FLDA model in terms of likelihood of the held-out test set. We also evaluate the accuracy of FLDA based on two application-oriented measures.",
"title": ""
},
{
"docid": "c002eab17c87343b5d138b34e3be73f3",
"text": "Finding semantically rich and computer-understandable representations for textual dialogues, utterances and words is crucial for dialogue systems (or conversational agents), as their performance mostly depends on understanding the context of conversations. In recent research approaches, responses have been generated utilizing a decoder architecture, given the distributed vector representation (embedding) of the current conversation. In this paper, the utilization of embeddings for answer retrieval is explored by using Locality-Sensitive Hashing Forest (LSH Forest), an Approximate Nearest Neighbor (ANN) model, to find similar conversations in a corpus and rank possible candidates. Experimental results on the well-known Ubuntu Corpus (in English) and a customer service chat dataset (in Dutch) show that, in combination with a candidate selection method, retrieval-based approaches outperform generative ones and reveal promising future research directions towards the usability of such a system.",
"title": ""
},
{
"docid": "e640e1831db44482e04005657f1a0f43",
"text": "Increasing emphasis has been placed on the use of effect size reporting in the analysis of social science data. Nonetheless, the use of effect size reporting remains inconsistent, and interpretation of effect size estimates continues to be confused. Researchers are presented with numerous effect sizes estimate options, not all of which are appropriate for every research question. Clinicians also may have little guidance in the interpretation of effect sizes relevant for clinical practice. The current article provides a primer of effect size estimates for the social sciences. Common effect sizes estimates, their use, and interpretations are presented as a guide for researchers.",
"title": ""
},
{
"docid": "2547be3cb052064fd1f995c45191e84d",
"text": "With the new generation of full low floor passenger trains, the constraints of weight and size on the traction transformer are becoming stronger. The ultimate target weight for the transformer is 1 kg/kVA. The reliability and the efficiency are also becoming more important. To address these issues, a multilevel topology using medium frequency transformers has been developed. It permits to reduce the weight and the size of the system and improves the global life cycle cost of the vehicle. The proposed multilevel converter consists of sixteen bidirectional direct current converters (cycloconverters) connected in series to the catenary 15 kV, 16.7 Hz through a choke inductor. The cycloconverters are connected to sixteen medium frequency transformers (400 Hz) that are fed by sixteen four-quadrant converters connected in parallel to a 1.8 kV DC link with a 2f filter. The control, the command and the hardware of the prototype are described in detail.",
"title": ""
},
{
"docid": "0bba0afb68f80afad03d0ba3d1ce9c89",
"text": "The Luneburg lens is an aberration-free lens that focuses light from all directions equally well. We fabricated and tested a Luneburg lens in silicon photonics. Such fully-integrated lenses may become the building blocks of compact Fourier optics on chips. Furthermore, our fabrication technique is sufficiently versatile for making perfect imaging devices on silicon platforms.",
"title": ""
},
{
"docid": "d3afe3be6debe665f442367b17fa4e28",
"text": "It is common practice for developers of user-facing software to transform a mock-up of a graphical user interface (GUI) into code. This process takes place both at an application’s inception and in an evolutionary context as GUI changes keep pace with evolving features. Unfortunately, this practice is challenging and time-consuming. In this paper, we present an approach that automates this process by enabling accurate prototyping of GUIs via three tasks: detection, classification, and assembly. First, logical components of a GUI are detected from a mock-up artifact using either computer vision techniques or mock-up metadata. Then, software repository mining, automated dynamic analysis, and deep convolutional neural networks are utilized to accurately classify GUI-components into domain-specific types (e.g., toggle-button). Finally, a data-driven, K-nearest-neighbors algorithm generates a suitable hierarchical GUI structure from which a prototype application can be automatically assembled. We implemented this approach for Android in a system called REDRAW. Our evaluation illustrates that REDRAW achieves an average GUI-component classification accuracy of 91% and assembles prototype applications that closely mirror target mock-ups in terms of visual affinity while exhibiting reasonable code structure. Interviews with industrial practitioners illustrate ReDraw’s potential to improve real development workflows.",
"title": ""
},
{
"docid": "a5c054899abf8aa553da4a576577678e",
"text": "Developmental programming resulting from maternal malnutrition can lead to an increased risk of metabolic disorders such as obesity, insulin resistance, type 2 diabetes and cardiovascular disorders in the offspring in later life. Furthermore, many conditions linked with developmental programming are also known to be associated with the aging process. This review summarizes the available evidence about the molecular mechanisms underlying these effects, with the potential to identify novel areas of therapeutic intervention. This could also lead to the discovery of new treatment options for improved patient outcomes.",
"title": ""
},
{
"docid": "aae5e70b47a7ec720333984e80725034",
"text": "The authors realize a 50% length reduction of short-slot couplers in a post-wall dielectric substrate by two techniques. One is to introduce hollow rectangular holes near the side walls of the coupled region. The difference of phase constant between the TE10 and TE20 propagating modes increases and the required length to realize a desired dividing ratio is reduced. Another is to remove two reflection-suppressing posts in the coupled region. The length of the coupled region is determined to cancel the reflections at both ends of the coupled region. The total length of a 4-way Butler matrix can be reduced to 48% in comparison with the conventional one and the couplers still maintain good dividing characteristics; the dividing ratio of the hybrid is less than 0.1 dB and the isolations of the couplers are more than 20 dB. key words: short-slot coupler, length reduction, Butler matrix, post-wall waveguide, dielectric substrate, rectangular hole",
"title": ""
},
{
"docid": "4ede3f2caa829e60e4f87a9b516e28bd",
"text": "This report describes the difficulties of training neural networks and in particular deep neural networks. It then provides a literature review of training methods for deep neural networks, with a focus on pre-training. It focuses on Deep Belief Networks composed of Restricted Boltzmann Machines and Stacked Autoencoders and provides an outreach on further and alternative approaches. It also includes related practical recommendations from the literature on training them. In the second part, initial experiments using some of the covered methods are performed on two databases. In particular, experiments are performed on the MNIST hand-written digit dataset and on facial emotion data from a Kaggle competition. The results are discussed in the context of results reported in other research papers. An error rate lower than the best contribution to the Kaggle competition is achieved using an optimized Stacked Autoencoder.",
"title": ""
},
{
"docid": "17ed907c630ec22cbbb5c19b5971238d",
"text": "The fastest tools for network reachability queries use adhoc algorithms to compute all packets from a source S that can reach a destination D. This paper examines whether network reachability can be solved efficiently using existing verification tools. While most verification tools only compute reachability (“Can S reach D?”), we efficiently generalize them to compute all reachable packets. Using new and old benchmarks, we compare model checkers, SAT solvers and various Datalog implementations. The only existing verification method that worked competitively on all benchmarks in seconds was Datalog with a new composite Filter-Project operator and a Difference of Cubes representation. While Datalog is slightly slower than the Hassel C tool, it is far more flexible. We also present new results that more precisely characterize the computational complexity of network verification. This paper also provides a gentle introduction to program verification for the networking community.",
"title": ""
},
{
"docid": "eb2663865d0d7312641e0748978b238c",
"text": "Recurrent Neural Networks (RNNs) are powerful models for sequential data that have the potential to learn long-term dependencies. However, they are computationally expensive to train and difficult to parallelize. Recent work has shown that normalizing intermediate representations of neural networks can significantly improve convergence rates in feed-forward neural networks [1]. In particular, batch normalization, which uses mini-batch statistics to standardize features, was shown to significantly reduce training time. In this paper, we investigate how batch normalization can be applied to RNNs. We show for both a speech recognition task and language modeling that the way we apply batch normalization leads to a faster convergence of the training criterion but doesn't seem to improve the generalization performance.",
"title": ""
}
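One way batch normalization can be applied to an RNN, as discussed in the passage above, is on the input-to-hidden projection only. The PyTorch sketch below shows that variant with placeholder sizes; it is a hedged illustration, not the authors' exact setup (for instance, per-time-step statistics are not kept separate here).

```python
# Hedged sketch: sizes are placeholders; BN is applied only to the input-to-hidden term.
import torch
import torch.nn as nn

class BNRNNCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.ih = nn.Linear(input_size, hidden_size, bias=False)
        self.hh = nn.Linear(hidden_size, hidden_size)
        self.bn = nn.BatchNorm1d(hidden_size)   # normalizes the input-to-hidden activation

    def forward(self, x_t, h_prev):
        # Standard tanh RNN update with the input projection batch-normalized.
        return torch.tanh(self.bn(self.ih(x_t)) + self.hh(h_prev))

# Unroll over a toy sequence: 20 time steps, batch of 8, 16 input features.
cell, h = BNRNNCell(16, 32), torch.zeros(8, 32)
x = torch.randn(20, 8, 16)
for t in range(x.size(0)):
    h = cell(x[t], h)
print(h.shape)  # torch.Size([8, 32])
```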
] |
scidocsrr
|
a521c595f1f154661d9c236149a82c09
|
Evaluation of the ARTutor augmented reality educational platform in tertiary education
|
[
{
"docid": "1cf3ee00f638ca44a3b9772a2df60585",
"text": "Navigation has been a popular area of research in both academia and industry. Combined with maps, and different localization technologies, navigation systems have become robust and more usable. By combining navigation with augmented reality, it can be improved further to become realistic and user friendly. This paper surveys existing researches carried out in this area, describes existing techniques for building augmented reality navigation systems, and the problems faced.",
"title": ""
}
] |
[
{
"docid": "4a9174770b8ca962e8135fbda0e41425",
"text": "is published by Princeton University Press and copyrighted, © 2006, by Princeton University Press. All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher, except for reading and browsing via the World Wide Web. Users are not permitted to mount this file on any network servers.",
"title": ""
},
{
"docid": "42db53797dc57cfdb7f963c55bb7f039",
"text": "Vast amounts of artistic data is scattered on-line from both museums and art applications. Collecting, processing and studying it with respect to all accompanying attributes is an expensive process. With a motivation to speed up and improve the quality of categorical analysis in the artistic domain, in this paper we propose an efficient and accurate method for multi-task learning with a shared representation applied in the artistic domain. We continue to show how different multi-task configurations of our method behave on artistic data and outperform handcrafted feature approaches as well as convolutional neural networks. In addition to the method and analysis, we propose a challenge like nature to the new aggregated data set with almost half a million samples and structuredmeta-data to encourage further research and societal engagement. ACM Reference format: Gjorgji Strezoski and Marcel Worring. 2017. OmniArt: Multi-task Deep Learning for Artistic Data Analysis.",
"title": ""
},
{
"docid": "a5e8a45d3de85acfd4192a0d096be7c4",
"text": "Fog computing, as a promising technique, is with huge advantages in dealing with large amounts of data and information with low latency and high security. We introduce a promising multiple access technique entitled nonorthogonal multiple access to provide communication service between the fog layer and the Internet of Things (IoT) device layer in fog computing, and propose a dynamic cooperative framework containing two stages. At the first stage, dynamic IoT device clustering is solved to reduce the system complexity and the delay for the IoT devices with better channel conditions. At the second stage, power allocation based energy management is solved using Nash bargaining solution in each cluster to ensure fairness among IoT devices. Simulation results reveal that our proposed scheme can simultaneously achieve higher spectrum efficiency and ensure fairness among IoT devices compared to other schemes.",
"title": ""
},
{
"docid": "1fcff5c3a30886780734bf9765b629af",
"text": "Automatic video captioning is challenging due to the complex interactions in dynamic real scenes. A comprehensive system would ultimately localize and track the objects, actions and interactions present in a video and generate a description that relies on temporal localization in order to ground the visual concepts. However, most existing automatic video captioning systems map from raw video data to high level textual description, bypassing localization and recognition, thus discarding potentially valuable information for content localization and generalization. In this work we present an automatic video captioning model that combines spatio-temporal attention and image classification by means of deep neural network structures based on long short-term memory. The resulting system is demonstrated to produce state-of-the-art results in the standard YouTube captioning benchmark while also offering the advantage of localizing the visual concepts (subjects, verbs, objects), with no grounding supervision, over space and time.",
"title": ""
},
{
"docid": "c42d1ee7a6b947e94eeb6c772e2b638f",
"text": "As mobile devices are equipped with more memory and computational capability, a novel peer-to-peer communication model for mobile cloud computing is proposed to interconnect nearby mobile devices through various short range radio communication technologies to form mobile cloudlets, where every mobile device works as either a computational service provider or a client of a service requester. Though this kind of computation offloading benefits compute-intensive applications, the corresponding service models and analytics tools are remaining open issues. In this paper we categorize computation offloading into three modes: remote cloud service mode, connected ad hoc cloudlet service mode, and opportunistic ad hoc cloudlet service mode. We also conduct a detailed analytic study for the proposed three modes of computation offloading at ad hoc cloudlet.",
"title": ""
},
{
"docid": "12b3918c438f3bed74d85cf643f613c9",
"text": "Knowledge extraction from unstructured data is a challenging research problem in research domain of Natural Language Processing (NLP). It requires complex NLP tasks like entity extraction and Information Extraction (IE), but one of the most challenging tasks is to extract all the required entities of data in the form of structured format so that data analysis can be applied. Our focus is to explain how the data is extracted in the form of datasets or conventional database so that further text and data analysis can be carried out. This paper presents a framework for Hadith data extraction from the Hadith authentic sources. Hadith is the collection of sayings of Holy Prophet Muhammad, who is the last holy prophet according to Islamic teachings. This paper discusses the preparation of the dataset repository and highlights issues in the relevant research domain. The research problem and their solutions of data extraction, preprocessing and data analysis are elaborated. The results have been evaluated using the standard performance evaluation measures. The dataset is available in multiple languages, multiple formats and is available free of cost for research purposes. Keywords—Data extraction; preprocessing; regex; Hadith; text analysis; parsing",
"title": ""
},
{
"docid": "7a72f69ad4926798e12f6fa8e598d206",
"text": "In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter’s field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed ‘DeepLabv3’ system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.",
"title": ""
},
{
"docid": "88afb98c0406d7c711b112fbe2a6f25e",
"text": "This paper provides a new metric, knowledge management performance index (KMPI), for assessing the performance of a firm in its knowledge management (KM) at a point in time. Firms are assumed to have always been oriented toward accumulating and applying knowledge to create economic value and competitive advantage. We therefore suggest the need for a KMPI which we have defined as a logistic function having five components that can be used to determine the knowledge circulation process (KCP): knowledge creation, knowledge accumulation, knowledge sharing, knowledge utilization, and knowledge internalization. When KCP efficiency increases, KMPI will also expand, enabling firms to become knowledgeintensive. To prove KMPI’s contribution, a questionnaire survey was conducted on 101 firms listed in the KOSDAQ market in Korea. We associated KMPI with three financial measures: stock price, price earnings ratio (PER), and R&D expenditure. Statistical results show that the proposed KMPI can represent KCP efficiency, while the three financial performance measures are also useful. # 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "2d0cc4c7ca6272200bb1ed1c9bba45f0",
"text": "Advanced Driver Assistance Systems (ADAS) based on video camera tends to be generalized in today's automotive. However, if most of these systems perform nicely in good weather conditions, they perform very poorly under adverse weather particularly under rain. We present a novel approach that aims at detecting raindrops on a car windshield using only images from an in-vehicle camera. Based on the photometric properties of raindrops, the algorithm relies on image processing technics to highlight raindrops. Its results can be further used for image restoration and vision enhancement and hence it is a valuable tool for ADAS.",
"title": ""
},
{
"docid": "ae9bdb80a60dd6820c1c9d9557a73ffc",
"text": "We propose a novel method for predicting image labels by fusing image content descriptors with the social media context of each image. An image uploaded to a social media site such as Flickr often has meaningful, associated information, such as comments and other images the user has uploaded, that is complementary to pixel content and helpful in predicting labels. Prediction challenges such as ImageNet [6]and MSCOCO [19] use only pixels, while other methods make predictions purely from social media context [21]. Our method is based on a novel fully connected Conditional Random Field (CRF) framework, where each node is an image, and consists of two deep Convolutional Neural Networks (CNN) and one Recurrent Neural Network (RNN) that model both textual and visual node/image information. The edge weights of the CRF graph represent textual similarity and link-based metadata such as user sets and image groups. We model the CRF as an RNN for both learning and inference, and incorporate the weighted ranking loss and cross entropy loss into the CRF parameter optimization to handle the training data imbalance issue. Our proposed approach is evaluated on the MIR-9K dataset and experimentally outperforms current state-of-the-art approaches.",
"title": ""
},
{
"docid": "bd0b0cef8ef780a44ad92258ac705395",
"text": "This chapter introduces some of the theoretical foundations of swarm intelligence. We focus on the design and implementation of the Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) algorithms for various types of function optimization problems, real world applications and data mining. Results are analyzed, discussed and their potentials are illustrated.",
"title": ""
},
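The preceding abstract covers Particle Swarm Optimization and Ant Colony Optimization for function optimization. As an illustrative, minimal PSO sketch (the standard inertia-weight formulation with hypothetical parameter values; not code from the chapter itself):

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f: R^dim -> R with a basic inertia-weight particle swarm."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros_like(x)                                # velocities
    pbest = x.copy()                                    # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()                # global best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# toy usage: minimize the sphere function
best_x, best_val = pso(lambda z: float(np.sum(z**2)), dim=5)
print(best_x, best_val)
```

The inertia weight w balances exploration against convergence; the cognitive and social coefficients c1 and c2 pull particles toward their own and the swarm's best positions respectively.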
{
"docid": "9185a7823e699c758dde3a81f7d6d86d",
"text": "Reading text from photographs is a challenging problem that has received a significant amount of attention. Two key components of most systems are (i) text detection from images and (ii) character recognition, and many recent methods have been proposed to design better feature representations and models for both. In this paper, we apply methods recently developed in machine learning -- specifically, large-scale algorithms for learning the features automatically from unlabeled data -- and show that they allow us to construct highly effective classifiers for both detection and recognition to be used in a high accuracy end-to-end system.",
"title": ""
},
{
"docid": "ca1c193e5e5af821772a5d123e84b72a",
"text": "Over the last few years, the phenomenon of adversarial examples — maliciously constructed inputs that fool trained machine learning models — has captured the attention of the research community, especially when the adversary is restricted to small modifications of a correctly handled input. Less surprisingly, image classifiers also lack human-level performance on randomly corrupted images, such as images with additive Gaussian noise. In this paper we provide both empirical and theoretical evidence that these are two manifestations of the same underlying phenomenon, establishing close connections between the adversarial robustness and corruption robustness research programs. This suggests that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions. Based on our results we recommend that future adversarial defenses consider evaluating the robustness of their methods to distributional shift with benchmarks such as Imagenet-C.",
"title": ""
},
{
"docid": "c13aff70c3b080cfd5d374639e5ec0e9",
"text": "Contemporary vehicles are getting equipped with an increasing number of Electronic Control Units (ECUs) and wireless connectivities. Although these have enhanced vehicle safety and efficiency, they are accompanied with new vulnerabilities. In this paper, we unveil a new important vulnerability applicable to several in-vehicle networks including Control Area Network (CAN), the de facto standard in-vehicle network protocol. Specifically, we propose a new type of Denial-of-Service (DoS), called the bus-off attack, which exploits the error-handling scheme of in-vehicle networks to disconnect or shut down good/uncompromised ECUs. This is an important attack that must be thwarted, since the attack, once an ECU is compromised, is easy to be mounted on safety-critical ECUs while its prevention is very difficult. In addition to the discovery of this new vulnerability, we analyze its feasibility using actual in-vehicle network traffic, and demonstrate the attack on a CAN bus prototype as well as on two real vehicles. Based on our analysis and experimental results, we also propose and evaluate a mechanism to detect and prevent the bus-off attack.",
"title": ""
},
{
"docid": "f70949a29dbac974a7559684d1da3ebb",
"text": "Cybercrime is all about the crimes in which communication channel and communication device has been used directly or indirectly as a medium whether it is a Laptop, Desktop, PDA, Mobile phones, Watches, Vehicles. The report titled “Global Risks for 2012”, predicts cyber-attacks as one of the top five risks in the World for Government and business sector. Cyber crime is a crime which is harder to detect and hardest to stop once occurred causing a long term negative impact on victims. With the increasing popularity of online banking, online shopping which requires sensitive personal and financial data, it is a term that we hear in the news with some frequency. Now, in order to protect ourselves from this crime we need to know what it is and how it does works against us. This paper presents a brief overview of all about cyber criminals and crime with its evolution, types, case study, preventive majors and the department working to combat this crime.",
"title": ""
},
{
"docid": "807cd6adc45a2adb7943c5a0fb5baa94",
"text": "Reliable performance evaluations require the use of representative workloads. This is no easy task because modern computer systems and their workloads are complex, with many interrelated attributes and complicated structures. Experts often use sophisticated mathematics to analyze and describe workload models, making these models difficult for practitioners to grasp. This book aims to close this gap by emphasizing the intuition and the reasoning behind the definitions and derivations related to the workload models. It provides numerous examples from real production systems, with hundreds of graphs. Using this book, readers will be able to analyze collected workload data and clean it if necessary, derive statistical models that include skewed marginal distributions and correlations, and consider the need for generative models and feedback from the system. The descriptive statistics techniques covered are also useful for other domains.",
"title": ""
},
{
"docid": "edf8d1bb84c0845dddad417a939e343b",
"text": "Suicides committed by intraorally placed firecrackers are rare events. Given to the use of more powerful components such as flash powder recently, some firecrackers may cause massive life-threatening injuries in case of such misuse. Innocuous black powder firecrackers are subject to national explosives legislation and only have the potential to cause harmless injuries restricted to the soft tissue. We here report two cases of suicide committed by an intraoral placement of firecrackers, resulting in similar patterns of skull injury. As it was first unknown whether black powder firecrackers can potentially cause serious skull injury, we compared the potential of destruction using black powder and flash powder firecrackers in a standardized skull simulant model (Synbone, Malans, Switzerland). This was the first experiment to date simulating the impacts resulting from an intraoral burst in a skull simulant model. The intraoral burst of a “D-Böller” (an example of one of the most powerful black powder firecrackers in Germany) did not lead to any injuries of the osseous skull. In contrast, the “La Bomba” (an example of the weakest known flash powder firecrackers) caused complex fractures of both the viscero- and neurocranium. The results obtained from this experimental study indicate that black powder firecrackers are less likely to cause severe injuries as a consequence of intraoral explosions, whereas flash powder-based crackers may lead to massive life-threatening craniofacial destructions and potentially death.",
"title": ""
},
{
"docid": "9709064022fd1ab5ef145ce58d2841c1",
"text": "Enforcing security in Internet of Things environments has been identified as one of the top barriers for realizing the vision of smart, energy-efficient homes and buildings. In this context, understanding the risks related to the use and potential misuse of information about homes, partners, and end-users, as well as, forming methods for integrating security-enhancing measures in the design is not straightforward and thus requires substantial investigation. A risk analysis applied on a smart home automation system developed in a research project involving leading industrial actors has been conducted. Out of 32 examined risks, 9 were classified as low and 4 as high, i.e., most of the identified risks were deemed as moderate. The risks classified as high were either related to the human factor or to the software components of the system. The results indicate that with the implementation of standard security features, new, as well as, current risks can be minimized to acceptable levels albeit that the most serious risks, i.e., those derived from the human factor, need more careful consideration, as they are inherently complex to handle. A discussion of the implications of the risk analysis results points to the need for a more general model of security and privacy included in the design phase of smart homes. With such a model of security and privacy in design in place, it will contribute to enforcing system security and enhancing user privacy in smart homes, and thus helping to further realize the potential in such IoT environments.",
"title": ""
},
{
"docid": "a76d5685b383e45778417d5eccdd8b6c",
"text": "The advent of both Cloud computing and Internet of Things (IoT) is changing the way of conceiving information and communication systems. Generally, we talk about IoT Cloud to indicate a new type of distributed system consisting of a set of smart devices interconnected with a remote Cloud infrastructure, platform, or software through the Internet and able to provide IoT as a Service (IoTaaS). In this paper, we discuss the near future evolution of IoT Clouds towards federated ecosystems, where IoT providers cooperate to offer more flexible services. Moreover, we present a general three-layer IoT Cloud Federation architecture, highlighting new business opportunities and challenges.",
"title": ""
},
{
"docid": "67509b64aaf1ead0bcba557d8cfe84bc",
"text": "Base on innovation resistance theory, this research builds the model of factors affecting consumers' resistance in using online travel in Thailand. Through the questionnaires and the SEM methods, empirical analysis results show that functional barriers are even greater sources of resistance to online travel website than psychological barriers. Online experience and independent travel experience have significantly influenced on consumer innovation resistance. Social influence plays an important role in this research.",
"title": ""
}
] |
scidocsrr
|
44c45c53021be2a5091a012f9299fe3c
|
First Step Towards End-to-End Parametric TTS Synthesis: Generating Spectral Parameters with Neural Attention
|
[
{
"docid": "280c39aea4584e6f722607df68ee28dc",
"text": "Statistical parametric speech synthesis (SPSS) using deep neural networks (DNNs) has shown its potential to produce naturally-sounding synthesized speech. However, there are limitations in the current implementation of DNN-based acoustic modeling for speech synthesis, such as the unimodal nature of its objective function and its lack of ability to predict variances. To address these limitations, this paper investigates the use of a mixture density output layer. It can estimate full probability density functions over real-valued output features conditioned on the corresponding input features. Experimental results in objective and subjective evaluations show that the use of the mixture density output layer improves the prediction accuracy of acoustic features and the naturalness of the synthesized speech.",
"title": ""
}
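The preceding abstract adds a mixture density output layer to a DNN acoustic model. A minimal sketch of the core computation (turning raw network outputs into mixture parameters and evaluating the negative log-likelihood of a one-dimensional target; the exact parameterization and dimensionality are assumptions for illustration, not the recipe of the paper):

```python
import numpy as np

def mdn_nll(raw, y, n_mix):
    """raw: (batch, 3*n_mix) network outputs; y: (batch,) scalar targets.
    Interprets the outputs as mixture weights, means and log std devs of a
    Gaussian mixture and returns the average negative log-likelihood."""
    logits = raw[:, :n_mix]
    mu = raw[:, n_mix:2 * n_mix]
    sigma = np.exp(raw[:, 2 * n_mix:])          # positive std devs
    # softmax over the mixture weights
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # Gaussian density of each target under each component
    diff = y[:, None] - mu
    comp = np.exp(-0.5 * (diff / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    lik = np.sum(w * comp, axis=1)
    return float(-np.mean(np.log(lik + 1e-12)))

# toy usage with random values standing in for network outputs
rng = np.random.default_rng(0)
raw = rng.normal(size=(4, 3 * 2))   # batch of 4, 2 mixture components
y = rng.normal(size=4)
print(mdn_nll(raw, y, n_mix=2))
```

Training minimizes this negative log-likelihood, which is what lets the layer predict full densities (including variances) rather than a single conditional mean.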
] |
[
{
"docid": "b4f1559a05ded4584ef06f2d660053ac",
"text": "A circularly polarized microstrip array antenna is proposed for Ka-band satellite applications. The antenna element consists of an L-shaped patch with parasitic circular-ring radiator. A sequentially rotated 2×2 antenna array exhibits a wideband 3-dB axial ratio bandwidth of 20.6% (25.75 GHz - 31.75 GHz) and 2;1-VSWR bandwidth of 24.0% (25.5 GHz - 32.5 GHz). A boresight gain of 10-11.8 dBic is achieved across a frequency range from 26 GHz to 32 GHz. An 8×8 antenna array exhibits a boresight gain of greater than 24 dBic over 27.25 GHz-31.25 GHz.",
"title": ""
},
{
"docid": "c601040737c42abcef996e027fabc8cf",
"text": "This article assumes that brands should be managed as valuable, long-term corporate assets. It is proposed that for a true brand asset mindset to be achieved, the relationship between brand loyalty and brand value needs to be recognised within the management accounting system. It is also suggested that strategic brand management is achieved by having a multi-disciplinary focus, which is facilitated by a common vocabulary. This article seeks to establish the relationships between the constructs and concepts of branding, and to provide a framework and vocabulary that aids effective communication between the functions of accounting and marketing. Performance measures for brand management are also considered, and a model for the management of brand equity is provided. Very simply, brand description (or identity or image) is tailored to the needs and wants of a target market using the marketing mix of product, price, place, and promotion. The success or otherwise of this process determines brand strength or the degree of brand loyalty. A brand's value is determined by the degree of brand loyalty, as this implies a guarantee of future cash flows. Feldwick considered that using the term brand equity creates the illusion that an operational relationship exists between brand description, brand strength and brand value that cannot be demonstrated to operate in practice. This is not surprising, given that brand description and brand strength are, broadly speaking, within the remit of marketers and brand value has been considered largely an accounting issue. However, for brands to be managed strategically as long-term assets, the relationship outlined in Figure 1 needs to be operational within the management accounting system. The efforts of managers of brands could be reviewed and assessed by the measurement of brand strength and brand value, and brand strategy modified accordingly. Whilst not a simple process, the measurement of outcomes is useful as part of a range of diagnostic tools for management. This is further explored in the summary discussion. Whilst there remains a diversity of opinion on the definition and basis of brand equity, most approaches consider brand equity to be a strategic issue, albeit often implicitly. The following discussion explores the range of interpretations of brand equity, showing how they relate to Feldwick's (1996) classification. Ambler and Styles (1996) suggest that managers of brands choose between taking profits today or storing them for the future, with brand equity being the `̀ . . . store of profits to be realised at a later date.'' Their definition follows Srivastava and Shocker (1991) with brand equity suggested as; . . . the aggregation of all accumulated attitudes and behavior patterns in the extended minds of consumers, distribution channels and influence agents, which will enhance future profits and long term cash flow. This definition of brand equity distinguishes the brand asset from its valuation, and falls into Feldwick's (1996) brand strength category of brand equity. This approach is intrinsically strategic in nature, with the emphasis away from short-term profits. Davis (1995) also emphasises the strategic importance of brand equity when he defines brand value (one form of brand equity) as `̀ . . . the potential strategic contributions and benefits that a brand can make to a company.'' In this definition, brand value is the resultant form of brand equity in Figure 1, or the outcome of consumer-based brand equity. 
Keller (1993) also takes the consumer-based brand strength approach to brand equity, suggesting that brand equity represents a condition in which the consumer is familiar with the brand and recalls some favourable, strong and unique brand associations. Hence, there is a differential effect of brand knowledge on consumer response to the marketing of a brand. This approach is aligned to the relationship described in Figure 1, where brand strength is a function of brand description. Winters (1991) relates brand equity to added value by suggesting that brand equity involves the value added to a product by consumers' associations and perceptions of a particular brand name. It is unclear in what way added value is being used, but brand equity fits the categories of brand description and brand strength as outlined above. Leuthesser (1988) offers a broad definition of brand equity as: the set of associations and behaviour on the part of a brand's customers, channel members and parent corporation that permits the brand to earn greater volume or greater margins than it could without the brand name. This definition covers Feldwick's classifications of brand description and brand strength implying a similar relationship to that outlined in Figure 1. The key difference to Figure 1 is that the outcome of brand strength is not specified as brand value, but implies market share, and profit as outcomes. Marketers tend to describe, rather than ascribe a figure to, the outcomes of brand strength. Pitta and Katsanis (1995) suggest that brand equity increases the probability of brand choice, leads to brand loyalty and 'insulates the brand from a measure of competitive threats.' Aaker (1991) suggests that strong brands will usually provide higher profit margins and better access to distribution channels, as well as providing a broad platform for product line extensions. Brand extension[1] is a commonly cited advantage of high brand equity, with Dacin and Smith (1994) and Keller and Aaker (1992) suggesting that successful brand extensions can also build brand equity. Loken and John (1993) and Aaker (1993) advise caution in that poor brand extensions can erode brand equity. (Figure 1: the brand equity chain.) Farquhar (1989) suggests a relationship between high brand equity and market power asserting that: The competitive advantage of firms that have brands with high equity includes the opportunity for successful extensions, resilience against competitors' promotional pressures, and creation of barriers to competitive entry. This relationship is summarised in Figure 2. Figure 2 indicates that there can be more than one outcome determined by brand strength apart from brand value. It should be noted that it is argued by Wood (1999) that brand value measurements could be used as an indicator of market power. Achieving a high degree of brand strength may be considered an important objective for managers of brands. If we accept that the relationships highlighted in Figures 1 and 2 are something that we should be aiming for, then it is logical to focus our attention on optimising brand description. This requires a rich understanding of the brand construct itself. Yet, despite an abundance of literature, the definitive brand construct has yet to be produced. Subsequent discussion explores the brand construct itself, and highlights the specific relationship between brands and added value. 
This relationship is considered to be key to the variety of approaches to brand definition within marketing, and is currently an area of incompatibility between marketing and accounting.",
"title": ""
},
{
"docid": "81c02e708a21532d972aca0b0afd8bb5",
"text": "We propose a new tree-based ORAM scheme called Circuit ORAM. Circuit ORAM makes both theoretical and practical contributions. From a theoretical perspective, Circuit ORAM shows that the well-known Goldreich-Ostrovsky logarithmic ORAM lower bound is tight under certain parameter ranges, for several performance metrics. Therefore, we are the first to give an answer to a theoretical challenge that remained open for the past twenty-seven years. Second, Circuit ORAM earns its name because it achieves (almost) optimal circuit size both in theory and in practice for realistic choices of block sizes. We demonstrate compelling practical performance and show that Circuit ORAM is an ideal candidate for secure multi-party computation applications.",
"title": ""
},
{
"docid": "ce2139f51970bfa5bd3738392f55ea48",
"text": "A novel type of dual circular polarizer for simultaneously receiving and transmitting right-hand and left-hand circularly polarized waves is developed and tested. It consists of a H-plane T junction of rectangular waveguide, one circular waveguide as an Eplane arm located on top of the junction, and two metallic pins used for matching. The theoretical analysis and design of the three-physicalport and four-mode polarizer were researched by solving ScatteringMatrix of the network and using a full-wave electromagnetic simulation tool. The optimized polarizer has the advantages of a very compact size with a volume smaller than 0.6λ3, low complexity and manufacturing cost. A couple of the polarizer has been manufactured and tested, and the experimental results are basically consistent with the theories.",
"title": ""
},
{
"docid": "33bc830ab66c9864fd4c45c463c2c9da",
"text": "We present a novel system for sketch-based face image editing, enabling users to edit images intuitively by sketching a few strokes on a region of interest. Our interface features tools to express a desired image manipulation by providing both geometry and color constraints as user-drawn strokes. As an alternative to the direct user input, our proposed system naturally supports a copy-paste mode, which allows users to edit a given image region by using parts of another exemplar image without the need of hand-drawn sketching at all. The proposed interface runs in real-time and facilitates an interactive and iterative workflow to quickly express the intended edits. Our system is based on a novel sketch domain and a convolutional neural network trained end-to-end to automatically learn to render image regions corresponding to the input strokes. To achieve high quality and semantically consistent results we train our neural network on two simultaneous tasks, namely image completion and image translation. To the best of our knowledge, we are the first to combine these two tasks in a unified framework for interactive image editing. Our results show that the proposed sketch domain, network architecture, and training procedure generalize well to real user input and enable high quality synthesis results without additional post-processing.",
"title": ""
},
{
"docid": "d1f02e2f57cffbc17387de37506fddc9",
"text": "The task of matching patterns in graph-structured data has applications in such diverse areas as computer vision, biology, electronics, computer aided design, social networks, and intelligence analysis. Consequently, work on graph-based pattern matching spans a wide range of research communities. Due to variations in graph characteristics and application requirements, graph matching is not a single problem, but a set of related problems. This paper presents a survey of existing work on graph matching, describing variations among problems, general and specific solution approaches, evaluation techniques, and directions for further research. An emphasis is given to techniques that apply to general graphs with semantic characteristics.",
"title": ""
},
{
"docid": "18bc3abbd6a4f51fdcfbafcc280f0805",
"text": "Complex disease genetics has been revolutionised in recent years by the advent of genome-wide association (GWA) studies. The chronic inflammatory bowel diseases (IBDs), Crohn's disease and ulcerative colitis have seen notable successes culminating in the discovery of 99 published susceptibility loci/genes (71 Crohn's disease; 47 ulcerative colitis) to date. Approximately one-third of loci described confer susceptibility to both Crohn's disease and ulcerative colitis. Amongst these are multiple genes involved in IL23/Th17 signalling (IL23R, IL12B, JAK2, TYK2 and STAT3), IL10, IL1R2, REL, CARD9, NKX2.3, ICOSLG, PRDM1, SMAD3 and ORMDL3. The evolving genetic architecture of IBD has furthered our understanding of disease pathogenesis. For Crohn's disease, defective processing of intracellular bacteria has become a central theme, following gene discoveries in autophagy and innate immunity (associations with NOD2, IRGM, ATG16L1 are specific to Crohn's disease). Genetic evidence has also demonstrated the importance of barrier function to the development of ulcerative colitis (HNF4A, LAMB1, CDH1 and GNA12). However, when the data are analysed in more detail, deeper themes emerge including the shared susceptibility seen with other diseases. Many immune-mediated diseases overlap in this respect, paralleling the reported epidemiological evidence. However, in several cases the reported shared susceptibility appears at odds with the clinical picture. Examples include both type 1 and type 2 diabetes mellitus. In this review we will detail the presently available data on the genetic overlap between IBD and other diseases. The discussion will be informed by the epidemiological data in the published literature and the implications for pathogenesis and therapy will be outlined. This arena will move forwards very quickly in the next few years. Ultimately, we anticipate that these genetic insights will transform the landscape of common complex diseases such as IBD.",
"title": ""
},
{
"docid": "f1a2d243c58592c7e004770dfdd4a494",
"text": "Dynamic voltage scaling (DVS), which adjusts the clockspeed and supply voltage dynamically, is an effective techniquein reducing the energy consumption of embedded real-timesystems. The energy efficiency of a DVS algorithm largelydepends on the performance of the slack estimation methodused in it. In this paper, we propose a novel DVS algorithmfor periodic hard real-time tasks based on an improved slackestimation algorithm. Unlike the existing techniques, the proposedmethod takes full advantage of the periodic characteristicsof the real-time tasks under priority-driven schedulingsuch as EDF. Experimental results show that the proposed algorithmreduces the energy consumption by 20~40% over theexisting DVS algorithm. The experiment results also show thatour algorithm based on the improved slack estimation methodgives comparable energy savings to the DVS algorithm basedon the theoretically optimal (but impractical) slack estimationmethod.",
"title": ""
},
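The preceding abstract concerns DVS with slack estimation under EDF scheduling. A simplified sketch of the underlying idea (a static speed set from total utilization, plus naive reclamation of slack when jobs finish early; this is an illustration under assumed task parameters, not the paper's algorithm):

```python
def static_speed(tasks):
    """tasks: list of (wcet, period) at full speed. Under EDF the task set is
    feasible at the lowest normalized speed equal to its total utilization."""
    u = sum(c / p for c, p in tasks)
    return min(1.0, u)

def reclaimed_speed(remaining_wcet, actual_remaining, base_speed):
    """If earlier jobs finished early, the time budget reserved at base_speed
    can be spread over the actual remaining work, lowering the speed further."""
    budget = remaining_wcet / base_speed          # time reserved for the rest
    return actual_remaining / budget if budget > 0 else base_speed

# toy usage: three periodic tasks given as (wcet, period) pairs
tasks = [(1.0, 4.0), (2.0, 8.0), (1.0, 10.0)]
s = static_speed(tasks)                 # 0.6 for this example
print(s)
print(reclaimed_speed(remaining_wcet=3.0, actual_remaining=2.0, base_speed=s))
```

The energy savings come from the roughly quadratic (or worse) dependence of dynamic power on supply voltage, so even modest speed reductions pay off.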
{
"docid": "eebeb59c737839e82ecc20a748b12c6b",
"text": "We present SWARM, a wearable affective technology designed to help a user to reflect on their own emotional state, modify their affect, and interpret the emotional states of others. SWARM aims for a universal design (inclusive of people with various disabilities), with a focus on modular actuation components to accommodate users' sensory capabilities and preferences, and a scarf form-factor meant to reduce the stigma of accessible technologies through a fashionable embodiment. Using an iterative, user-centered approach, we present SWARM's design. Additionally, we contribute findings for communicating emotions through technology actuations, wearable design techniques (including a modular soft circuit design technique that fuses conductive fabric with actuation components), and universal design considerations for wearable technology.",
"title": ""
},
{
"docid": "f05dd13991878e5395561ce24564975c",
"text": "Detecting informal settlements might be one of the most challenging tasks within urban remote sensing. This phenomenon occurs mostly in developing countries. In order to carry out the urban planning and development tasks necessary to improve living conditions for the poorest world-wide, an adequate spatial data basis is needed (see Mason, O. S. & Fraser, C. S., 1998). This can only be obtained through the analysis of remote sensing data, which represents an additional challenge from a technical point of view. Formal settlements by definition are mapped sufficiently for most purposes. However, this does not hold for informal settlements. Due to their microstructure and instability of shape, the detection of these settlements is substantially more difficult. Hence, more sophisticated data and methods of image analysis are necessary, which ideally act as a spatial data basis for a further informal settlement management. While these methods are usually quite labour-intensive, one should nonetheless bear in mind cost-effectivity of the applied methods and tools. In the present article, it will be shown how eCognition can be used to detect and discriminate informal settlements from other land-use-forms by describing typical characteristics of colour, texture, shape and context. This software is completely objectoriented and uses a patented, multi-scale image segmentation approach. The generated segments act as image objects whose physical and contextual characteristics can be described by means of fuzzy logic. The article will show methods and strategies using eCognition to detect informal settlements from high resolution space-borne image data such as IKONOS. A final discussion of the results will be given.",
"title": ""
},
{
"docid": "e2e8aac3945fa7206dee21792133b77b",
"text": "We provide an alternative to the maximum likelihood method for making inferences about the parameters of the logistic regression model. The method is based appropriate permutational distributions of sufficient statistics. It is useful for analysing small or unbalanced binary data with covariates. It also applies to small-sample clustered binary data. We illustrate the method by analysing several biomedical data sets.",
"title": ""
},
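The preceding abstract concerns permutation-based inference for logistic regression parameters. As a simplified Monte Carlo illustration (a permutation test of association between a single covariate and a binary outcome using the statistic sum(x*y), which is the sufficient statistic for that covariate's coefficient in a simple logistic model; the exact conditional procedure in the paper is more involved):

```python
import numpy as np

def permutation_pvalue(x, y, n_perm=10000, seed=0):
    """Two-sided Monte Carlo permutation p-value for T = sum(x*y)."""
    rng = np.random.default_rng(seed)
    t_obs = np.sum(x * y)
    null = np.empty(n_perm)
    for i in range(n_perm):
        null[i] = np.sum(x * rng.permutation(y))   # break the x-y association
    centre = null.mean()
    exceed = np.sum(np.abs(null - centre) >= np.abs(t_obs - centre))
    return (exceed + 1) / (n_perm + 1)             # add-one correction

# toy usage: small, unbalanced binary data
x = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])
print(permutation_pvalue(x, y))
```

Because the reference distribution is built by permutation rather than asymptotic theory, the p-value remains valid even when the sample is too small or too unbalanced for maximum likelihood asymptotics to hold.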
{
"docid": "14dfa311d7edf2048ebd4425ae38d3e2",
"text": "Forest species recognition has been traditionally addressed as a texture classification problem, and explored using standard texture methods such as Local Binary Patterns (LBP), Local Phase Quantization (LPQ) and Gabor Filters. Deep learning techniques have been a recent focus of research for classification problems, with state-of-the art results for object recognition and other tasks, but are not yet widely used for texture problems. This paper investigates the usage of deep learning techniques, in particular Convolutional Neural Networks (CNN), for texture classification in two forest species datasets - one with macroscopic images and another with microscopic images. Given the higher resolution images of these problems, we present a method that is able to cope with the high-resolution texture images so as to achieve high accuracy and avoid the burden of training and defining an architecture with a large number of free parameters. On the first dataset, the proposed CNN-based method achieves 95.77% of accuracy, compared to state-of-the-art of 97.77%. On the dataset of microscopic images, it achieves 97.32%, beating the best published result of 93.2%.",
"title": ""
},
{
"docid": "9343a2775b5dac7c48c1c6cec3d0a59c",
"text": "The Extended String-to-String Correction Problem [ESSCP] is defined as the problem of determining, for given strings A and B over alphabet V, a minimum-cost sequence S of edit operations such that S(A) &equil; B. The sequence S may make use of the operations: <underline>Change, Insert, Delete</underline> and <underline>Swaps</underline>, each of constant cost W<subscrpt>C</subscrpt>, W<subscrpt>I</subscrpt>, W<subscrpt>D</subscrpt>, and W<subscrpt>S</subscrpt> respectively. Swap permits any pair of adjacent characters to be interchanged.\n The principal results of this paper are:\n (1) a brief presentation of an algorithm (the CELLAR algorithm) which solves ESSCP in time Ø(¦A¦* ¦B¦* ¦V¦<supscrpt>s</supscrpt>*s), where s &equil; min(4W<subscrpt>C</subscrpt>, W<subscrpt>I</subscrpt>+W<subscrpt>D</subscrpt>)/W<subscrpt>S</subscrpt> + 1;\n (2) presentation of polynomial time algorithms for the cases (a) W<subscrpt>S</subscrpt> &equil; 0, (b) W<subscrpt>S</subscrpt> > 0, W<subscrpt>C</subscrpt>&equil; W<subscrpt>I</subscrpt>&equil; W<subscrpt>D</subscrpt>&equil; @@@@;\n (3) proof that ESSCP, with W<subscrpt>I</subscrpt> < W<subscrpt>C</subscrpt> &equil; W<subscrpt>D</subscrpt> &equil; @@@@, 0 < W<subscrpt>S</subscrpt> < @@@@, suitably encoded, is NP-complete. (The remaining case, W<subscrpt>S</subscrpt>&equil; @@@@, reduces ESSCP to the string-to-string correction problem of [1], where an Ø( ¦A¦* ¦B¦) algorithm is given.) Thus, “almost all” ESSCP's can be solved in deterministic polynomial time, but the general problem is NP-complete.",
"title": ""
},
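The preceding abstract defines edit distance with weighted Change, Insert, Delete and Swap operations. As an illustrative dynamic-programming sketch for the restricted case in which a swapped pair is not edited again (the classic restricted-swap variant; this is not the CELLAR algorithm from the paper):

```python
def weighted_edit_distance(a, b, wc=1.0, wi=1.0, wd=1.0, ws=1.0):
    """Minimum cost to turn a into b using Change/Insert/Delete and
    swaps of adjacent characters (each swapped pair edited only once)."""
    n, m = len(a), len(b)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * wd
    for j in range(1, m + 1):
        d[0][j] = j * wi
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0.0 if a[i - 1] == b[j - 1] else wc
            d[i][j] = min(d[i - 1][j] + wd,        # delete a[i-1]
                          d[i][j - 1] + wi,        # insert b[j-1]
                          d[i - 1][j - 1] + cost)  # change / match
            if (i > 1 and j > 1 and a[i - 1] == b[j - 2]
                    and a[i - 2] == b[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + ws)  # adjacent swap
    return d[n][m]

print(weighted_edit_distance("abcd", "acbd"))       # 1.0 (one swap)
print(weighted_edit_distance("kitten", "sitting"))  # 3.0
```

This restricted variant runs in O(|A|*|B|); the abstract's complexity and NP-completeness results concern the general problem, where swapped characters may participate in further edits.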
{
"docid": "5d6bd34fb5fdb44950ec5d98e77219c3",
"text": "This paper describes an experimental setup and results of user tests focusing on the perception of temporal characteristics of vibration of a mobile device. The experiment consisted of six vibration stimuli of different length. We asked the subjects to score the subjective perception level in a five point Lickert scale. The results suggest that the optimal duration of the control signal should be between 50 and 200 ms in this specific case. Longer durations were perceived as being irritating.",
"title": ""
},
{
"docid": "ca82ceafc6079f416d9f7b94a7a6a665",
"text": "When Big data and cloud computing join forces together, several domains like: healthcare, disaster prediction and decision making become easier and much more beneficial to users in term of information gathering, although cloud computing will reduce time and cost of analyzing information for big data, it may harm the confidentiality and integrity of the sensitive data, for instance, in healthcare, when analyzing disease's spreading area, the name of the infected people must remain secure, hence the obligation to adopt a secure model that protect sensitive data from malicious users. Several case studies on the integration of big data in cloud computing, urge on how easier it would be to analyze and manage big data in this complex envronement. Companies must consider outsourcing their sensitive data to the cloud to take advantage of its beneficial resources such as huge storage, fast calculation, and availability, yet cloud computing might harm the security of data stored and computed in it (confidentiality, integrity). Therefore, strict paradigm must be adopted by organization to obviate their outsourced data from being stolen, damaged or lost. In this paper, we compare between the existing models to secure big data implementation in the cloud computing. Then, we propose our own model to secure Big Data on the cloud computing environement, considering the lifecycle of data from uploading, storage, calculation to its destruction.",
"title": ""
},
{
"docid": "0fdd7f5c5cd1225567e89b456ef25ea0",
"text": "In this work we propose a hierarchical approach for labeling semantic objects and regions in scenes. Our approach is reminiscent of early vision literature in that we use a decomposition of the image in order to encode relational and spatial information. In contrast to much existing work on structured prediction for scene understanding, we bypass a global probabilistic model and instead directly train a hierarchical inference procedure inspired by the message passing mechanics of some approximate inference procedures in graphical models. This approach mitigates both the theoretical and empirical difficulties of learning probabilistic models when exact inference is intractable. In particular, we draw from recent work in machine learning and break the complex inference process into a hierarchical series of simple machine learning subproblems. Each subproblem in the hierarchy is designed to capture the image and contextual statistics in the scene. This hierarchy spans coarse-to-fine regions and explicitly models the mixtures of semantic labels that may be present due to imperfect segmentation. To avoid cascading of errors and overfitting, we train the learning problems in sequence to ensure robustness to likely errors earlier in the inference sequence and leverage the stacking approach developed by Cohen et al.",
"title": ""
},
{
"docid": "922354208e78ed5154b8dfbe4ed14c7e",
"text": "Digital systems, especially those for mobile applications are sensitive to power consumption, chip size and costs. Therefore they are realized using fixed-point architectures, either dedicated HW or programmable DSPs. On the other hand, system design starts from a floating-point description. These requirements have been the motivation for FRIDGE (Fixed-point pRogrammIng DesiGn Environment), a design environment for the specification, evaluation and implementation of fixed-point systems. FRIDGE offers a seamless design flow from a floating- point description to a fixed-point implementation. Within this paper we focus on two core capabilities of FRIDGE: (1) the concept of an interactive, automated transformation of floating-point programs written in ANSI-C into fixed-point specifications, based on an interpolative approach. The design time reductions that can be achieved make FRIDGE a key component for an efficient HW/SW-CoDesign. (2) a fast fixed-point simulation that performs comprehensive compile-time analyses, reducing simulation time by one order of magnitude compared to existing approaches.",
"title": ""
},
{
"docid": "bfee1553c6207909abc9820e741d6e01",
"text": "Ciphertext-policy attribute-based encryption (CP-ABE) is a promising cryptographic technique that integrates data encryption with access control for ensuring data security in IoT systems. However, the efficiency problem of CP-ABE is still a bottleneck limiting its development and application. A widespread consensus is that the computation overhead of bilinear pairing is excessive in the practical application of ABE, especially for the devices or the processors with limited computational resources and power supply. In this paper, we proposed a novel pairing-free data access control scheme based on CP-ABE using elliptic curve cryptography, abbreviated PF-CP-ABE. We replace complicated bilinear pairing with simple scalar multiplication on elliptic curves, thereby reducing the overall computation overhead. And we designed a new way of key distribution that it can directly revoke a user or an attribute without updating other users’ keys during the attribute revocation phase. Besides, our scheme use linear secret sharing scheme access structure to enhance the expressiveness of the access policy. The security and performance analysis show that our scheme significantly improved the overall efficiency as well as ensured the security.",
"title": ""
},
{
"docid": "74812252b395dca254783d05e1db0cf5",
"text": "Cyber-Physical Security Testbeds serve as valuable experimental platforms to implement and evaluate realistic, complex cyber attack-defense experiments. Testbeds, unlike traditional simulation platforms, capture communication, control and physical system characteristics and their interdependencies adequately in a unified environment. In this paper, we show how the PowerCyber CPS testbed at Iowa State was used to implement and evaluate cyber attacks on one of the fundamental Wide-Area Control applications, namely, the Automatic Generation Control (AGC). We provide a brief overview of the implementation of the experimental setup on the testbed. We then present a case study using the IEEE 9 bus system to evaluate the impacts of cyber attacks on AGC. Specifically, we analyzed the impacts of measurement based attacks that manipulated the tie-line and frequency measurements, and control based attacks that manipulated the ACE values sent to generators. We found that these attacks could potentially create under frequency conditions and could cause unnecessary load shedding. As part of future work, we plan to extend this work and utilize the experimental setup to implement other sophisticated, stealthy attack vectors and also develop attack-resilient algorithms to detect and mitigate such attacks.",
"title": ""
},
{
"docid": "2ecb4d841ef57a3acdf05cbb727aecbf",
"text": "Boosting is a general method for improving the accuracy of any given learning algorithm. This short overview paper introduces the boosting algorithm AdaBoost, and explains the underlying theory of boosting, including an explanation of why boosting often does not suffer from overfitting as well as boosting’s relationship to support-vector machines. Some examples of recent applications of boosting are also described.",
"title": ""
}
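The preceding abstract introduces the AdaBoost algorithm. A compact sketch of discrete AdaBoost with one-dimensional threshold stumps (a generic illustration of the algorithm, with arbitrary toy data; not code from the paper):

```python
import numpy as np

def fit_stump(X, y, w):
    """Pick the feature/threshold/polarity minimizing the weighted error."""
    best = (None, None, 1, np.inf)  # (feature, threshold, polarity, error)
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
                err = np.sum(w[pred != y])
                if err < best[3]:
                    best = (f, thr, pol, err)
    return best

def adaboost(X, y, rounds=20):
    """y in {-1, +1}. Returns a list of (alpha, stump) pairs."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        f, thr, pol, err = fit_stump(X, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)        # weight of this weak learner
        pred = np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)               # up-weight the mistakes
        w /= w.sum()
        ensemble.append((alpha, (f, thr, pol)))
    return ensemble

def predict(ensemble, X):
    score = np.zeros(len(X))
    for alpha, (f, thr, pol) in ensemble:
        score += alpha * np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
    return np.sign(score)

# toy usage: separate points by the sign of their first coordinate
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] > 0, 1, -1)
model = adaboost(X, y, rounds=10)
print((predict(model, X) == y).mean())
```

The exponential reweighting is what focuses later weak learners on the examples the current ensemble still gets wrong, which is the mechanism behind the margin-based explanations of boosting's resistance to overfitting mentioned in the abstract.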
] |
scidocsrr
|
75f6be81731dbdcf1264cddd7c47d904
|
28-Gbaud PAM4 and 56-Gb/s NRZ Performance Comparison Using 1310-nm Al-BH DFB Laser
|
[
{
"docid": "30b1b4df0901ab61ab7e4cfb094589d1",
"text": "Direct modulation at 56 and 50 Gb/s of 1.3-μm InGaAlAs ridge-shaped-buried heterostructure (RS-BH) asymmetric corrugation-pitch-modulation (ACPM) distributed feedback lasers is experimentally demonstrated. The fabricated lasers have a low threshold current (5.6 mA at 85°C), high temperature characteristics (71 K), high slope relaxation frequency (3.2 GHz/mA1/2 at 85°C), and wide bandwidth (22.1 GHz at 85°C). These superior properties enable the lasers to run at 56 Gb/s and 55°C and 50 Gb/s at up to 80°C for backto-back operation with clear eye openings. This is achieved by the combination of a low-leakage RS-BH and an ACPM grating. Moreover, successful transmission of 56and 50-Gb/s modulated signals over a 10-km standard single-mode fiber is achieved. These results confirm the suitability of this type of laser for use as a cost-effective light source in 400 GbE and OTU5 applications.",
"title": ""
}
] |
[
{
"docid": "8af28adbd019c07a9d27e6189abecb7a",
"text": "We present a method for automatically generating input parsers from English specifications of input file formats. We use a Bayesian generative model to capture relevant natural language phenomena and translate the English specification into a specification tree, which is then translated into a C++ input parser. We model the problem as a joint dependency parsing and semantic role labeling task. Our method is based on two sources of information: (1) the correlation between the text and the specification tree and (2) noisy supervision as determined by the success of the generated C++ parser in reading input examples. Our results show that our approach achieves 80.0% F-Score accuracy compared to an F-Score of 66.7% produced by a state-of-the-art semantic parser on a dataset of input format specifications from the ACM International Collegiate Programming Contest (which were written in English for humans with no intention of providing support for automated processing).1",
"title": ""
},
{
"docid": "68bcc07055db2c714001ccb5f8dc96a7",
"text": "The concept of an antipodal bipolar fuzzy graph of a given bipolar fuzzy graph is introduced. Characterizations of antipodal bipolar fuzzy graphs are presented when the bipolar fuzzy graph is complete or strong. Some isomorphic properties of antipodal bipolar fuzzy graph are discussed. The notion of self median bipolar fuzzy graphs of a given bipolar fuzzy graph is also introduced.",
"title": ""
},
{
"docid": "ab2f3971a6b4f2f22e911b2367d1ef9e",
"text": "Many multimedia systems stream real-time visual data continuously for a wide variety of applications. These systems can produce vast amounts of data, but few studies take advantage of the versatile and real-time data. This paper presents a novel model based on the Convolutional Neural Networks (CNNs) to handle such imbalanced and heterogeneous data and successfully identifies the semantic concepts in these multimedia systems. The proposed model can discover the semantic concepts from the data with a skewed distribution using a dynamic sampling technique. The paper also presents a system that can retrieve real-time visual data from heterogeneous cameras, and the run-time environment allows the analysis programs to process the data from thousands of cameras simultaneously. The evaluation results in comparison with several state-of-the-art methods demonstrate the ability and effectiveness of the proposed model on visual data captured by public network cameras.",
"title": ""
},
{
"docid": "4b3425ce40e46b7a595d389d61daca06",
"text": "Genetic or acquired destabilization of the dermal extracellular matrix evokes injury- and inflammation-driven progressive soft tissue fibrosis. Dystrophic epidermolysis bullosa (DEB), a heritable human skin fragility disorder, is a paradigmatic disease to investigate these processes. Studies of DEB have generated abundant new information on cellular and molecular mechanisms at play in skin fibrosis which are not only limited to intractable diseases, but also applicable to some of the most common acquired conditions. Here, we discuss recent advances in understanding the biological and mechanical mechanisms driving the dermal fibrosis in DEB. Much of this progress is owed to the implementation of cell and tissue omics studies, which we pay special attention to. Based on the novel findings and increased understanding of the disease mechanisms in DEB, translational aspects and future therapeutic perspectives are emerging.",
"title": ""
},
{
"docid": "9b7b5d50a6578342f230b291bfeea443",
"text": "Ecological systems are generally considered among the most complex because they are characterized by a large number of diverse components, nonlinear interactions, scale multiplicity, and spatial heterogeneity. Hierarchy theory, as well as empirical evidence, suggests that complexity often takes the form of modularity in structure and functionality. Therefore, a hierarchical perspective can be essential to understanding complex ecological systems. But, how can such hierarchical approach help us with modeling spatially heterogeneous, nonlinear dynamic systems like landscapes, be they natural or human-dominated? In this paper, we present a spatially explicit hierarchical modeling approach to studying the patterns and processes of heterogeneous landscapes. We first discuss the theoretical basis for the modeling approach—the hierarchical patch dynamics (HPD) paradigm and the scaling ladder strategy, and then describe the general structure of a hierarchical urban landscape model (HPDM-PHX) which is developed using this modeling approach. In addition, we introduce a hierarchical patch dynamics modeling platform (HPD-MP), a software package that is designed to facilitate the development of spatial hierarchical models. We then illustrate the utility of HPD-MP through two examples: a hierarchical cellular automata model of land use change and a spatial multi-species population dynamics model. © 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "64a7885b10d3439feee616c359851ec4",
"text": "While humans have an incredible capacity to acquire new skills and alter their behavior as a result of experience, enhancements in performance are typically narrowly restricted to the parameters of the training environment, with little evidence of generalization to different, even seemingly highly related, tasks. Such specificity is a major obstacle for the development of many real-world training or rehabilitation paradigms, which necessarily seek to promote more general learning. In contrast to these typical findings, research over the past decade has shown that training on 'action video games' produces learning that transfers well beyond the training task. This has led to substantial interest among those interested in rehabilitation, for instance, after stroke or to treat amblyopia, or training for various precision-demanding jobs, for instance, endoscopic surgery or piloting unmanned aerial drones. Although the predominant focus of the field has been on outlining the breadth of possible action-game-related enhancements, recent work has concentrated on uncovering the mechanisms that underlie these changes, an important first step towards the goal of designing and using video games for more definite purposes. Game playing may not convey an immediate advantage on new tasks (increased performance from the very first trial), but rather the true effect of action video game playing may be to enhance the ability to learn new tasks. Such a mechanism may serve as a signature of training regimens that are likely to produce transfer of learning.",
"title": ""
},
{
"docid": "5987154caa7a77b20707a45f2f3b3545",
"text": "In community question answering (cQA), the quality of answers are determined by the matching degree between question-answer pairs and the correlation among the answers. In this paper, we show that the dependency between the answer quality labels also plays a pivotal role. To validate the effectiveness of label dependency, we propose two neural network-based models, with different combination modes of Convolutional Neural Networks, Long Short Term Memory and Conditional Random Fields. Extensive experiments are taken on the dataset released by the SemEval-2015 cQA shared task. The first model is a stacked ensemble of the networks. It achieves 58.96% on macro averaged F1, which improves the state-of-the-art neural networkbased method by 2.82% and outperforms the Top-1 system in the shared task by 1.77%. The second is a simple attention-based model whose input is the connection of the question and its corresponding answers. It produces promising results with 58.29% on overall F1 and gains the best performance on the Good and Bad categories.",
"title": ""
},
{
"docid": "51d534721e7003cf191189be37342394",
"text": "This paper addresses the problem of automatic player identification in broadcast sports videos filmed with a single side-view medium distance camera. Player identification in this setting is a challenging task because visual cues such as faces and jersey numbers are not clearly visible. Thus, this task requires sophisticated approaches to capture distinctive features from players to distinguish them. To this end, we use Convolutional Neural Networks (CNN) features extracted at multiple scales and encode them with an advanced pooling, called Fisher vector. We leverage it for exploring representations that have sufficient discriminatory power and ability to magnify subtle differences. We also analyze the distinguishing parts of the players and present a part based pooling approach to use these distinctive feature points. The resulting player representation is able to identify players even in difficult scenes. It achieves state-of-the-art results up to 96% on NBA basketball clips.",
"title": ""
},
{
"docid": "40a88d168ad559c1f68051e710c49d6b",
"text": "Modern robotic systems tend to get more complex sensors at their disposal, resulting in complex algorithms to process their data. For example, camera images are being used map their environment and plan their route. On the other hand, the robotic systems are becoming mobile more often and need to be as energy-efficient as possible; quadcopters are an example of this. These two trends interfere with each other: Data-intensive, complex algorithms require a lot of processing power, which is in general not energy-friendly nor mobile-friendly. In this paper, we describe how to move the complex algorithms to a computing platform that is not part of the mobile part of the setup, i.e. to offload the processing part to a base station. We use the ROS framework for this, as ROS provides a lot of existing computation solutions. On the mobile part of the system, our hard real-time execution framework, called LUNA, is used, to make it possible to run the loop controllers on it. The design of a `bridge node' is explained, which is used to connect the LUNA framework to ROS. The main issue to tackle is to subscribe to an arbitrary ROS topic at run-time, instead of defining the ROS topics at compile-time. Furthermore, it is shown that this principle is working and the requirements of network bandwidth are discussed.",
"title": ""
},
{
"docid": "fd48614d255b7c7bc7054b4d5de69a15",
"text": "Article history: Received 31 December 2007 Received in revised form 12 December 2008 Accepted 3 January 2009",
"title": ""
},
{
"docid": "d4ab2085eec138f99d4d490b0cbf9e3a",
"text": "A frequency-reconfigurable microstrip slot antenna is proposed. The antenna is capable of frequency switching at six different frequency bands between 2.2 and 4.75 GHz. Five RF p-i-n diode switches are positioned in the slot to achieve frequency reconfigurability. The feed line and the slot are bended to reduce 33% of the original size of the antenna. The biasing circuit is integrated into the ground plane to minimize the parasitic effects toward the performance of the antenna. Simulated and measured results are used to demonstrate the performance of the antenna. The simulated and measured return losses, together with the radiation patterns, are presented and compared.",
"title": ""
},
{
"docid": "1489207c35a613d38a4f9c06816604f0",
"text": "Switching common-mode voltage (CMV) generated by the pulse width modulation (PWM) of the inverter causes common-mode currents, which lead to motor bearing failures and electromagnetic interference problems in multiphase drives. Such switching CMV can be reduced by taking advantage of the switching states of multilevel multiphase inverters that produce zero CMV. Specific space-vector PWM (SVPWM) techniques with CMV elimination, which only use zero CMV states, have been proposed for three-level five-phase drives, and for open-end winding five-, six-, and seven-phase drives, but such methods cannot be extended to a higher number of levels or phases. This paper presents a general (for any number of levels and phases) SVPMW with CMV elimination. The proposed technique can be applied to most multilevel topologies, has low computational complexity and is suitable for low-cost hardware implementations. The new algorithm is implemented in a low-cost field-programmable gate array and it is successfully tested in the laboratory using a five-level five-phase motor drive.",
"title": ""
},
{
"docid": "e2b8dd31dad42e82509a8df6cf21df11",
"text": "Recent experiments indicate the need for revision of a model of spatial memory consisting of viewpoint-specific representations, egocentric spatial updating and a geometric module for reorientation. Instead, it appears that both egocentric and allocentric representations exist in parallel, and combine to support behavior according to the task. Current research indicates complementary roles for these representations, with increasing dependence on allocentric representations with the amount of movement between presentation and retrieval, the number of objects remembered, and the size, familiarity and intrinsic structure of the environment. Identifying the neuronal mechanisms and functional roles of each type of representation, and of their interactions, promises to provide a framework for investigation of the organization of human memory more generally.",
"title": ""
},
{
"docid": "42d2cdb17f23e22da74a405ccb71f09b",
"text": "Nostalgia is a psychological phenomenon we all can relate to but have a hard time to define. What characterizes the mental state of feeling nostalgia? What psychological function does it serve? Different published materials in a wide range of fields, from consumption research and sport science to clinical psychology, psychoanalysis and sociology, all have slightly different definition of this mental experience. Some claim it is a psychiatric disease giving melancholic emotions to a memory you would consider a happy one, while others state it enforces positivity in our mood. First in this paper a thorough review of the history of nostalgia is presented, then a look at the body of contemporary nostalgia research to see what it could be constituted of. Finally, we want to dig even deeper to see what is suggested by the literature in terms of triggers and functions. Some say that digitally recorded material like music and videos has a potential nostalgic component, which could trigger a reflection of the past in ways that was difficult before such inventions. Hinting towards that nostalgia as a cultural phenomenon is on a rising scene. Some authors say that odors have the strongest impact on nostalgic reverie due to activating it without too much cognitive appraisal. Cognitive neuropsychology has shed new light on a lot of human psychological phenomena‘s and even though empirical testing have been scarce in this field, it should get a fair scrutiny within this perspective as well and hopefully helping to clarify the definition of the word to ease future investigations, both scientifically speaking and in laymen‘s retro hysteria.",
"title": ""
},
{
"docid": "25eedd2defb9e0a0b22e44195a4b767b",
"text": "Social media such as Twitter have become an important method of communication, with potential opportunities for NLG to facilitate the generation of social media content. We focus on the generation of indicative tweets that contain a link to an external web page. While it is natural and tempting to view the linked web page as the source text from which the tweet is generated in an extractive summarization setting, it is unclear to what extent actual indicative tweets behave like extractive summaries. We collect a corpus of indicative tweets with their associated articles and investigate to what extent they can be derived from the articles using extractive methods. We also consider the impact of the formality and genre of the article. Our results demonstrate the limits of viewing indicative tweet generation as extractive summarization, and point to the need for the development of a methodology for tweet generation that is sensitive to genre-specific issues.",
"title": ""
},
{
"docid": "6f8e565aff657cbc1b65217d72ead3ab",
"text": "This paper explores patterns of adoption and use of information and communications technology (ICT) by small and medium sized enterprises (SMEs) in the southwest London and Thames Valley region of England. The paper presents preliminary results of a survey of around 400 SMEs drawn from four economically significant sectors in the region: food processing, transport and logistics, media and Internet services. The main objectives of the study were to explore ICT adoption and use patterns by SMEs, to identify factors enabling or inhibiting the successful adoption and use of ICT, and to explore the effectiveness of government policy mechanisms at national and regional levels. While our main result indicates a generally favourable attitude to ICT amongst the SMEs surveyed, it also suggests a failure to recognise ICT’s strategic potential. A surprising result was the overwhelming ignorance of regional, national and European Union wide policy initiatives to support SMEs. This strikes at the very heart of regional, national and European policy that have identified SMEs as requiring specific support mechanisms. Our findings from one of the UK’s most productive regions therefore have important implications for policy aimed at ICT adoption and use by SMEs.",
"title": ""
},
{
"docid": "8d197bf27af825b9972a490d3cc9934c",
"text": "The past decade has witnessed an increasing adoption of cloud database technology, which provides better scalability, availability, and fault-tolerance via transparent partitioning and replication, and automatic load balancing and fail-over. However, only a small number of cloud databases provide strong consistency guarantees for distributed transactions, despite decades of research on distributed transaction processing, due to practical challenges that arise in the cloud setting, where failures are the norm, and human administration is minimal. For example, dealing with locks left by transactions initiated by failed machines, and determining a multi-programming level that avoids thrashing without under-utilizing available resources, are some of the challenges that arise when using lock-based transaction processing mechanisms in the cloud context. Even in the case of optimistic concurrency control, most proposals in the literature deal with distributed validation but still require the database to acquire locks during two-phase commit when installing updates of a single transaction on multiple machines. Very little theoretical work has been done to entirely eliminate the need for locking in distributed transactions, including locks acquired during two-phase commit. In this paper, we re-design optimistic concurrency control to eliminate any need for locking even for atomic commitment, while handling the practical issues in earlier theoretical work related to this problem. We conduct an extensive experimental study to evaluate our approach against lock-based methods under various setups and workloads, and demonstrate that our approach provides many practical advantages in the cloud context.",
"title": ""
},
{
"docid": "7b8dffab502fae2abbea65464e2727aa",
"text": "Bone tissue is continuously remodeled through the concerted actions of bone cells, which include bone resorption by osteoclasts and bone formation by osteoblasts, whereas osteocytes act as mechanosensors and orchestrators of the bone remodeling process. This process is under the control of local (e.g., growth factors and cytokines) and systemic (e.g., calcitonin and estrogens) factors that all together contribute for bone homeostasis. An imbalance between bone resorption and formation can result in bone diseases including osteoporosis. Recently, it has been recognized that, during bone remodeling, there are an intricate communication among bone cells. For instance, the coupling from bone resorption to bone formation is achieved by interaction between osteoclasts and osteoblasts. Moreover, osteocytes produce factors that influence osteoblast and osteoclast activities, whereas osteocyte apoptosis is followed by osteoclastic bone resorption. The increasing knowledge about the structure and functions of bone cells contributed to a better understanding of bone biology. It has been suggested that there is a complex communication between bone cells and other organs, indicating the dynamic nature of bone tissue. In this review, we discuss the current data about the structure and functions of bone cells and the factors that influence bone remodeling.",
"title": ""
},
{
"docid": "e01d5be587c73aaa133acb3d8aaed996",
"text": "This paper presents a new optimization-based method to control three micro-scale magnetic agents operating in close proximity to each other for applications in microrobotics. Controlling multiple magnetic microrobots close to each other is difficult due to magnetic interactions between the agents, and here we seek to control those interactions for the creation of desired multi-agent formations. Our control strategy arises from physics that apply force in the negative direction of states errors. The objective is to regulate the inter-agent spacing, heading and position of the set of agents, for motion in two dimensions, while the system is inherently underactuated. Simulation results on three agents and a proof-of-concept experiment on two agents show the feasibility of the idea to shed light on future micro/nanoscale multi-agent explorations. Average tracking error of less than 50 micrometers and 1.85 degrees is accomplished for the regulation of the inter-agent space and the pair heading angle, respectively, for identical spherical-shape agents with nominal radius less than of 250 micrometers operating within several body-lengths of each other.",
"title": ""
},
{
"docid": "b66dc21916352dc19a25073b32331edb",
"text": "We present a framework and algorithm to analyze first person RGBD videos captured from the robot while physically interacting with humans. Specifically, we explore reactions and interactions of persons facing a mobile robot from a robot centric view. This new perspective offers social awareness to the robots, enabling interesting applications. As far as we know, there is no public 3D dataset for this problem. Therefore, we record two multi-modal first-person RGBD datasets that reflect the setting we are analyzing. We use a humanoid and a non-humanoid robot equipped with a Kinect. Notably, the videos contain a high percentage of ego-motion due to the robot self-exploration as well as its reactions to the persons' interactions. We show that separating the descriptors extracted from ego-motion and independent motion areas, and using them both, allows us to achieve superior recognition results. Experiments show that our algorithm recognizes the activities effectively and outperforms other state-of-the-art methods on related tasks.",
"title": ""
}
] |
scidocsrr
|
91b725d0bb28af9d1e260ebf50e01592
|
Deep Learning for Lip Reading using Audio-Visual Information for Urdu Language
|
[
{
"docid": "aa70b48d3831cf56b1432ebba6960797",
"text": "Lipreading is the task of decoding text from the movement of a speaker’s mouth. Traditional approaches separated the problem into two stages: designing or learning visual features, and prediction. More recent deep lipreading approaches are end-to-end trainable (Wand et al., 2016; Chung & Zisserman, 2016a). All existing works, however, perform only word classification, not sentence-level sequence prediction. Studies have shown that human lipreading performance increases for longer words (Easton & Basala, 1982), indicating the importance of features capturing temporal context in an ambiguous communication channel. Motivated by this observation, we present LipNet, a model that maps a variable-length sequence of video frames to text, making use of spatiotemporal convolutions, an LSTM recurrent network, and the connectionist temporal classification loss, trained entirely end-to-end. To the best of our knowledge, LipNet is the first lipreading model to operate at sentence-level, using a single end-to-end speaker-independent deep model to simultaneously learn spatiotemporal visual features and a sequence model. On the GRID corpus, LipNet achieves 93.4% accuracy, outperforming experienced human lipreaders and the previous 79.6% state-of-the-art accuracy.",
"title": ""
}
] |
[
{
"docid": "bf559d41d2f0fb72ccb2331418573acc",
"text": "Massive Open Online Courses or MOOCs, both in their major approaches xMOOC and cMOOC, are attracting a very interesting debate about their influence in the higher education future. MOOC have both great defenders and detractors. As an emerging trend, MOOCs have a hype repercussion in technological and pedagogical areas, but they should demonstrate their real value in specific implementation and within institutional strategies. Independently, MOOCs have different issues such as high dropout rates and low number of cooperative activities among participants. This paper presents an adaptive proposal to be applied in MOOC definition and development, with a special attention to cMOOC, that may be useful to tackle the mentioned MOOC problems.",
"title": ""
},
{
"docid": "5853f6f7405a73fe6587f2fd0b4d9419",
"text": "Substantial evidence suggests that mind-wandering typically occurs at a significant cost to performance. Mind-wandering–related deficits in performance have been observed in many contexts, most notably reading, tests of sustained attention, and tests of aptitude. Mind-wandering has been shown to negatively impact reading comprehension and model building, impair the ability to withhold automatized responses, and disrupt performance on tests of working memory and intelligence. These empirically identified costs of mind-wandering have led to the suggestion that mind-wandering may represent a pure failure of cognitive control and thus pose little benefit. However, emerging evidence suggests that the role of mind-wandering is not entirely pernicious. Recent studies have shown that mind-wandering may play a crucial role in both autobiographical planning and creative problem solving, thus providing at least two possible adaptive functions of the phenomenon. This article reviews these observed costs and possible functions of mind-wandering and identifies important avenues of future inquiry.",
"title": ""
},
{
"docid": "04500f0dbf48d3c1d8eb02ed43d46e00",
"text": "The coverage of a test suite is often used as a proxy for its ability to detect faults. However, previous studies that investigated the correlation between code coverage and test suite effectiveness have failed to reach a consensus about the nature and strength of the relationship between these test suite characteristics. Moreover, many of the studies were done with small or synthetic programs, making it unclear whether their results generalize to larger programs, and some of the studies did not account for the confounding influence of test suite size. In addition, most of the studies were done with adequate suites, which are are rare in practice, so the results may not generalize to typical test suites. \n We have extended these studies by evaluating the relationship between test suite size, coverage, and effectiveness for large Java programs. Our study is the largest to date in the literature: we generated 31,000 test suites for five systems consisting of up to 724,000 lines of source code. We measured the statement coverage, decision coverage, and modified condition coverage of these suites and used mutation testing to evaluate their fault detection effectiveness. \n We found that there is a low to moderate correlation between coverage and effectiveness when the number of test cases in the suite is controlled for. In addition, we found that stronger forms of coverage do not provide greater insight into the effectiveness of the suite. Our results suggest that coverage, while useful for identifying under-tested parts of a program, should not be used as a quality target because it is not a good indicator of test suite effectiveness.",
"title": ""
},
{
"docid": "ba87d116445aa7a2ced1b9d8333c0b7a",
"text": "Study Design Systematic review with meta-analysis. Background The addition of hip strengthening to knee strengthening for persons with patellofemoral pain has the potential to optimize treatment effects. There is a need to systematically review and pool the current evidence in this area. Objective To examine the efficacy of hip strengthening, associated or not with knee strengthening, to increase strength, reduce pain, and improve activity in individuals with patellofemoral pain. Methods A systematic review of randomized and/or controlled trials was performed. Participants in the reviewed studies were individuals with patellofemoral pain, and the experimental intervention was hip and knee strengthening. Outcome data related to muscle strength, pain, and activity were extracted from the eligible trials and combined in a meta-analysis. Results The review included 14 trials involving 673 participants. Random-effects meta-analyses revealed that hip and knee strengthening decreased pain (mean difference, -3.3; 95% confidence interval [CI]: -5.6, -1.1) and improved activity (standardized mean difference, 1.4; 95% CI: 0.03, 2.8) compared to no training/placebo. In addition, hip and knee strengthening was superior to knee strengthening alone for decreasing pain (mean difference, -1.5; 95% CI: -2.3, -0.8) and improving activity (standardized mean difference, 0.7; 95% CI: 0.2, 1.3). Results were maintained beyond the intervention period. Meta-analyses showed no significant changes in strength for any of the interventions. Conclusion Hip and knee strengthening is effective and superior to knee strengthening alone for decreasing pain and improving activity in persons with patellofemoral pain; however, these outcomes were achieved without a concurrent change in strength. Level of Evidence Therapy, level 1a-. J Orthop Sports Phys Ther 2018;48(1):19-31. Epub 15 Oct 2017. doi:10.2519/jospt.2018.7365.",
"title": ""
},
{
"docid": "242977c8b2a5768b18fc276309407d60",
"text": "We present a parser that relies primarily on extracting information directly from surface spans rather than on propagating information through enriched grammar structure. For example, instead of creating separate grammar symbols to mark the definiteness of an NP, our parser might instead capture the same information from the first word of the NP. Moving context out of the grammar and onto surface features can greatly simplify the structural component of the parser: because so many deep syntactic cues have surface reflexes, our system can still parse accurately with context-free backbones as minimal as Xbar grammars. Keeping the structural backbone simple and moving features to the surface also allows easy adaptation to new languages and even to new tasks. On the SPMRL 2013 multilingual constituency parsing shared task (Seddah et al., 2013), our system outperforms the top single parser system of Björkelund et al. (2013) on a range of languages. In addition, despite being designed for syntactic analysis, our system also achieves stateof-the-art numbers on the structural sentiment task of Socher et al. (2013). Finally, we show that, in both syntactic parsing and sentiment analysis, many broad linguistic trends can be captured via surface features.",
"title": ""
},
{
"docid": "899243c1b24010cc5423e2ac9a7f6df2",
"text": "The introduction of wearable video cameras (e.g., GoPro) in the consumer market has promoted video life-logging, motivating users to generate large amounts of video data. This increasing flow of first-person video has led to a growing need for automatic video summarization adapted to the characteristics and applications of egocentric video. With this paper, we provide the first comprehensive survey of the techniques used specifically to summarize egocentric videos. We present a framework for first-person view summarization and compare the segmentation methods and selection algorithms used by the related work in the literature. Next, we describe the existing egocentric video datasets suitable for summarization and, then, the various evaluation methods. Finally, we analyze the challenges and opportunities in the field and propose new lines of research.",
"title": ""
},
{
"docid": "95403ce714b2102ca5f50cfc4d838e07",
"text": "Recently, vision-based Advanced Driver Assist Systems have gained broad interest. In this work, we investigate free-space detection, for which we propose to employ a Fully Convolutional Network (FCN). We show that this FCN can be trained in a self-supervised manner and achieve similar results compared to training on manually annotated data, thereby reducing the need for large manually annotated training sets. To this end, our self-supervised training relies on a stereo-vision disparity system, to automatically generate (weak) training labels for the color-based FCN. Additionally, our self-supervised training facilitates online training of the FCN instead of offline. Consequently, given that the applied FCN is relatively small, the free-space analysis becomes highly adaptive to any traffic scene that the vehicle encounters. We have validated our algorithm using publicly available data and on a new challenging benchmark dataset that is released with this paper. Experiments show that the online training boosts performance with 5% when compared to offline training, both for Fmax and AP .",
"title": ""
},
{
"docid": "d06393c467e19b0827eea5f86bbf4e98",
"text": "This paper presents the results of a systematic review of existing literature on the integration of agile software development with user-centered design approaches. It shows that a common process model underlies such approaches and discusses which artifacts are used to support the collaboration between designers and developers.",
"title": ""
},
{
"docid": "be8c7050c87ad0344f8ddd10c7832bb4",
"text": "A novel method of map matching using the Global Positioning System (GPS) has been developed for civilian use, which uses digital mapping data to infer the <100 metres systematic position errors which result largely from “selective availability” (S/A) imposed by the U.S. military. The system tracks a vehicle on all possible roads (road centre-lines) in a computed error region, then uses a method of rapidly detecting inappropriate road centre-lines from the set of all those possible. This is called the Road Reduction Filter (RRF) algorithm. Point positioning is computed using C/A code pseudorange measurements direct from a GPS receiver. The least squares estimation is performed in the software developed for the experiment described in this paper. Virtual differential GPS (VDGPS) corrections are computed and used from a vehicle’s previous positions, thus providing an autonomous alternative to DGPS for in-car navigation and fleet management. Height aiding is used to augment the solution and reduce the number of satellites required for a position solution. Ordnance Survey (OS) digital map data was used for the experiment, i.e. OSCAR 1m resolution road centre-line geometry and Land Form PANORAMA 1:50,000, 50m-grid digital terrain model (DTM). Testing of the algorithm is reported and results are analysed. Vehicle positions provided by RRF are compared with the “true” position determined using high precision (cm) GPS carrier phase techniques. It is shown that height aiding using a DTM and the RRF significantly improve the accuracy of position provided by inexpensive single frequency GPS receivers. INTRODUCTION The accurate location of a vehicle on a highway network model is fundamental to any in-car-navigation system, personal navigation assistant, fleet management system, National Mayday System (Carstensen, 1998) and many other applications that provide a current vehicle location, a digital map and perhaps directions or route guidance. A great",
"title": ""
},
{
"docid": "c66c1523322809d1b2d1279b5b2b8384",
"text": "The design of the Smart Grid requires solving a complex problem of combined sensing, communications and control and, thus, the problem of choosing a networking technology cannot be addressed without also taking into consideration requirements related to sensor networking and distributed control. These requirements are today still somewhat undefined so that it is not possible yet to give quantitative guidelines on how to choose one communication technology over the other. In this paper, we make a first qualitative attempt to better understand the role that Power Line Communications (PLCs) can have in the Smart Grid. Furthermore, we here report recent results on the electrical and topological properties of the power distribution network. The topological characterization of the power grid is not only important because it allows us to model the grid as an information source, but also because the grid becomes the actual physical information delivery infrastructure when PLCs are used.",
"title": ""
},
{
"docid": "1cbd70bddd09be198f6695209786438d",
"text": "In this research work a neural network based technique to be applied on condition monitoring and diagnosis of rotating machines equipped with hydrostatic self levitating bearing system is presented. Based on fluid measured data, such pressures and temperature, vibration analysis based diagnosis is being carried out by determining the vibration characteristics of the rotating machines on the basis of signal processing tasks. Required signals are achieved by conversion of measured data (fluid temperature and pressures) into virtual data (vibration magnitudes) by means of neural network functional approximation techniques.",
"title": ""
},
{
"docid": "02842ef0acfed3d59612ce944b948adf",
"text": "One of the common modalities for observing mental activity is electroencephalogram (EEG) signals. However, EEG recording is highly susceptible to various sources of noise and to inter subject differences. In order to solve these problems we present a deep recurrent neural network (RNN) architecture to learn robust features and predict the levels of cognitive load from EEG recordings. Using a deep learning approach, we first transform the EEG time series into a sequence of multispectral images which carries spatial information. Next, we train our recurrent hybrid network to learn robust representations from the sequence of frames. The proposed approach preserves spectral, spatial and temporal structures and extracts features which are less sensitive to variations along each dimension. Our results demonstrate cognitive memory load prediction across four different levels with an overall accuracy of 92.5% during the memory task execution and reduce classification error to 7.61% in comparison to other state-of-art techniques.",
"title": ""
},
{
"docid": "93fad9723826fcf99ae229b4e7298a31",
"text": "In this work, we provide the first construction of Attribute-Based Encryption (ABE) for general circuits. Our construction is based on the existence of multilinear maps. We prove selective security of our scheme in the standard model under the natural multilinear generalization of the BDDH assumption. Our scheme achieves both Key-Policy and Ciphertext-Policy variants of ABE. Our scheme and its proof of security directly translate to the recent multilinear map framework of Garg, Gentry, and Halevi.",
"title": ""
},
{
"docid": "af7379b6c3ecf9abaf3da04d06f06d00",
"text": "AIM\nThe purpose of the current study was to examine the effect of Astaxanthin (Asx) supplementation on muscle enzymes as indirect markers of muscle damage, oxidative stress markers and antioxidant response in elite young soccer players.\n\n\nMETHODS\nThirty-two male elite soccer players were randomly assigned in a double-blind fashion to Asx and placebo (P) group. After the 90 days of supplementation, the athletes performed a 2 hour acute exercise bout. Blood samples were obtained before and after 90 days of supplementation and after the exercise at the end of observational period for analysis of thiobarbituric acid-reacting substances (TBARS), advanced oxidation protein products (AOPP), superoxide anion (O2•¯), total antioxidative status (TAS), sulphydril groups (SH), superoxide-dismutase (SOD), serum creatine kinase (CK) and aspartate aminotransferase (AST).\n\n\nRESULTS\nTBARS and AOPP levels did not change throughout the study. Regular training significantly increased O2•¯ levels (main training effect, P<0.01). O2•¯ concentrations increased after the soccer exercise (main exercise effect, P<0.01), but these changes reached statistical significance only in the P group (exercise x supplementation effect, P<0.05). TAS levels decreased significantly post- exercise only in P group (P<0.01). Both Asx and P groups experienced increase in total SH groups content (by 21% and 9%, respectively) and supplementation effect was marginally significant (P=0.08). Basal SOD activity significantly decreased both in P and in Asx group by the end of the study (main training effect, P<0.01). All participants showed a significant decrease in basal CK and AST activities after 90 days (main training effect, P<0.01 and P<0.001, respectively). CK and AST activities in serum significantly increased as result of soccer exercise (main exercise effect, P<0.001 and P<0.01, respectively). Postexercise CK and AST levels were significantly lower in Asx group compared to P group (P<0.05)\n\n\nCONCLUSION\nThe results of the present study suggest that soccer training and soccer exercise are associated with excessive production of free radicals and oxidative stress, which might diminish antioxidant system efficiency. Supplementation with Asx could prevent exercise induced free radical production and depletion of non-enzymatic antioxidant defense in young soccer players.",
"title": ""
},
{
"docid": "681219aa3cb0fb579e16c734d2e357f7",
"text": "Brain computer interface (BCI) is an assistive technology, which decodes neurophysiological signals generated by the human brain and translates them into control signals to control external devices, e.g., wheelchairs. One problem challenging noninvasive BCI technologies is the limited control dimensions from decoding movements of, mainly, large body parts, e.g., upper and lower limbs. It has been reported that complicated dexterous functions, i.e., finger movements, can be decoded in electrocorticography (ECoG) signals, while it remains unclear whether noninvasive electroencephalography (EEG) signals also have sufficient information to decode the same type of movements. Phenomena of broadband power increase and low-frequency-band power decrease were observed in EEG in the present study, when EEG power spectra were decomposed by a principal component analysis (PCA). These movement-related spectral structures and their changes caused by finger movements in EEG are consistent with observations in previous ECoG study, as well as the results from ECoG data in the present study. The average decoding accuracy of 77.11% over all subjects was obtained in classifying each pair of fingers from one hand using movement-related spectral changes as features to be decoded using a support vector machine (SVM) classifier. The average decoding accuracy in three epilepsy patients using ECoG data was 91.28% with the similarly obtained features and same classifier. Both decoding accuracies of EEG and ECoG are significantly higher than the empirical guessing level (51.26%) in all subjects (p<0.05). The present study suggests the similar movement-related spectral changes in EEG as in ECoG, and demonstrates the feasibility of discriminating finger movements from one hand using EEG. These findings are promising to facilitate the development of BCIs with rich control signals using noninvasive technologies.",
"title": ""
},
{
"docid": "934ee0b55bf90eed86fabfff8f1238d1",
"text": "Schelling (1969, 1971a,b, 1978) considered a simple proximity model of segregation where individual agents only care about the types of people living in their own local geographical neighborhood, the spatial structure being represented by oneor two-dimensional lattices. In this paper, we argue that segregation might occur not only in the geographical space, but also in social environments. Furthermore, recent empirical studies have documented that social interaction structures are well-described by small-world networks. We generalize Schelling’s model by allowing agents to interact in small-world networks instead of regular lattices. We study two alternative dynamic models where agents can decide to move either arbitrarily far away (global model) or are bound to choose an alternative location in their social neighborhood (local model). Our main result is that the system attains levels of segregation that are in line with those reached in the lattice-based spatial proximity model. Thus, Schelling’s original results seem to be robust to the structural properties of the network.",
"title": ""
},
{
"docid": "a73bc91aa52aa15dbe30fbabac5d0ecc",
"text": "In this paper we address the problem of multivariate outlier detection using the (unsupervised) self-organizing map (SOM) algorithm introduced by Kohonen. We examine a number of techniques, based on summary statistics and graphics derived from the trained SOM, and conclude that they work well in cooperation with each other. Useful tools include the median interneuron distance matrix and the projection of the trained map (via Sammon’s mapping). SOM quantization errors provide an important complementary source of information for certain type of outlying behavior. Empirical results are reported on both artificial and real data.",
"title": ""
},
{
"docid": "127b8dfb562792d02a4c09091e09da90",
"text": "Current approaches to conservation and natural-resource management often focus on single objectives, resulting in many unintended consequences. These outcomes often affect society through unaccounted-for ecosystem services. A major challenge in moving to a more ecosystem-based approach to management that would avoid such societal damages is the creation of practical tools that bring a scientifically sound, production function-based approach to natural-resource decision making. A new set of computer-based models is presented, the Integrated Valuation of Ecosystem Services and Tradeoffs tool (InVEST) that has been designed to inform such decisions. Several of the key features of these models are discussed, including the ability to visualize relationships among multiple ecosystem services and biodiversity, the ability to focus on ecosystem services rather than biophysical processes, the ability to project service levels and values in space, sensitivity to manager-designed scenarios, and flexibility to deal with data and knowledge limitations. Sample outputs of InVEST are shown for two case applications; the Willamette Basin in Oregon and the Amazon Basin. Future challenges relating to the incorporation of social data, the projection of social distributional effects, and the design of effective policy mechanisms are discussed.",
"title": ""
},
{
"docid": "bc3e2c94cd53f472f36229e2d9c5d69e",
"text": "The problem of planning safe tra-jectories for computer controlled manipulators with two movable links and multiple degrees of freedom is analyzed, and a solution to the problem proposed. The key features of the solution are: 1. the identification of trajectory primitives and a hierarchy of abstraction spaces that permit simple manip-ulator models, 2. the characterization of empty space by approximating it with easily describable entities called charts-the approximation is dynamic and can be selective, 3. a scheme for planning motions close to obstacles that is computationally viable, and that suggests how proximity sensors might be used to do the planning, and 4. the use of hierarchical decomposition to reduce the complexity of the planning problem. 1. INTRODUCTION the 2D and 3D solution noted, it is easy to visualize the solution for the 3D manipulator. Section 2 of this paper presents an example, and Section 3 a statement and analysis of the problem. Sections 4 and 5 present the solution. Section 6 summarizes the key ideas in the solution and indicates areas for future work. 2. AN EXAMPLE This section describes an example (Figure 2.1) of the collision detection and avoidance problem for a two-dimensional manipulator. The example highlights features of the problem and its solution. 2.1 The Problem The manipulator has two links and three degrees of freedom. The larger link, called the boom, slides back and forth and can rotate about the origin. The smaller link, called the forearm, has a rotational degree of freedom about the tip of the boom. The tip of the forearm is called the hand. S and G are the initial and final configurations of the manipulator. Any real manipulator's links will have physical dimensions. The line segment representation of the link is an abstraction; the physical dimensions can be accounted for and how this is done is described later. The problem of planning safe trajectories for computer controlled manipulators with two movable links and multiple degrees of freedom is analyzed, and a solution to the problem is presented. The trajectory planning system is initialized with a description of the part of the environment that the manipulator is to maneuver in. When given the goal position and orientation of the hand, the system plans a complete trajectory that will safely maneuver the manipulator into the goal configuration. The executive system in charge of operating the hardware uses this trajectory to physically move the manipulator. …",
"title": ""
}
] |
scidocsrr
|
515d426102670dffd3391507bf5dff3f
|
A survey of Context-aware Recommender System and services
|
[
{
"docid": "6bfdd78045816085cd0fa5d8bb91fd18",
"text": "Contextual factors can greatly influence the users' preferences in listening to music. Although it is hard to capture these factors directly, it is possible to see their effects on the sequence of songs liked by the user in his/her current interaction with the system. In this paper, we present a context-aware music recommender system which infers contextual information based on the most recent sequence of songs liked by the user. Our approach mines the top frequent tags for songs from social tagging Web sites and uses topic modeling to determine a set of latent topics for each song, representing different contexts. Using a database of human-compiled playlists, each playlist is mapped into a sequence of topics and frequent sequential patterns are discovered among these topics. These patterns represent frequent sequences of transitions between the latent topics representing contexts. Given a sequence of songs in a user's current interaction, the discovered patterns are used to predict the next topic in the playlist. The predicted topics are then used to post-filter the initial ranking produced by a traditional recommendation algorithm. Our experimental evaluation suggests that our system can help produce better recommendations in comparison to a conventional recommender system based on collaborative or content-based filtering. Furthermore, the topic modeling approach proposed here is also useful in providing better insight into the underlying reasons for song selection and in applications such as playlist construction and context prediction.",
"title": ""
},
{
"docid": "bf1dd3cf77750fe5e994fd6c192ba1be",
"text": "Increasingly manufacturers of smartphone devices are utilising a diverse range of sensors. This innovation has enabled developers to accurately determine a user's current context. In recent years there has also been a renewed requirement to use more types of context and reduce the current over-reliance on location as a context. Location based systems have enjoyed great success and this context is very important for mobile devices. However, using additional context data such as weather, time, social media sentiment and user preferences can provide a more accurate model of the user's current context. One area that has been significantly improved by the increased use of context in mobile applications is tourism. Traditionally tour guide applications rely heavily on location and essentially ignore other types of context. This has led to problems of inappropriate suggestions, due to inadequate content filtering and tourists experiencing information overload. These problems can be mitigated if appropriate personalisation and content filtering is performed. The intelligent decision making that this paper proposes with regard to the development of the VISIT [17] system, is a hybrid based recommendation approach made up of collaborative filtering, content based recommendation and demographic profiling. Intelligent reasoning will then be performed as part of this hybrid system to determine the weight/importance of each different context type.",
"title": ""
}
] |
[
{
"docid": "ab7e775e95f18c27210941602624340f",
"text": "Introduction Blepharophimosis, ptosis and epicanthus inversus (BPES) is an autosomal dominant condition caused by mutations in the FOXL2 gene, located on chromosome 3q23 (Beysen et al., 2005). It has frequently been reported in association with 3q chromosomal deletions and rearrangements (Franceschini et al., 1983; Al-Awadi et al., 1986; Alvarado et al., 1987; Jewett et al., 1993; Costa et al., 1998). In a review of the phenotype associated with interstitial deletions of the long arm of chromosome 3, Alvarado et al. (1987) commented on brachydactyly and abnormalities of the hands and feet. A syndrome characterized by growth and developmental delay, microcephaly, ptosis, blepharophimosis, large abnormally shaped ears, and a prominent nasal tip was suggested to be associated with deletion of the 3q2 region.",
"title": ""
},
{
"docid": "7e66fa86706c823915cce027dc3c5db8",
"text": "Very recently, some studies on neural dependency parsers have shown advantage over the traditional ones on a wide variety of languages. However, for graphbased neural dependency parsing systems, they either count on the long-term memory and attention mechanism to implicitly capture the high-order features or give up the global exhaustive inference algorithms in order to harness the features over a rich history of parsing decisions. The former might miss out the important features for specific headword predictions without the help of the explicit structural information, and the latter may suffer from the error propagation as false early structural constraints are used to create features when making future predictions. We explore the feasibility of explicitly taking high-order features into account while remaining the main advantage of global inference and learning for graph-based parsing. The proposed parser first forms an initial parse tree by head-modifier predictions based on the first-order factorization. High-order features (such as grandparent, sibling, and uncle) then can be defined over the initial tree, and used to refine the parse tree in an iterative fashion. Experimental results showed that our model (called INDP) archived competitive performance to existing benchmark parsers on both English and Chinese datasets.",
"title": ""
},
{
"docid": "02c00f1fd4efffd5f01c165b64fb3f5e",
"text": "A fundamental question in human memory is how the brain represents sensory-specific information during the process of retrieval. One hypothesis is that regions of sensory cortex are reactivated during retrieval of sensory-specific information (1). Here we report findings from a study in which subjects learned a set of picture and sound items and were then given a recall test during which they vividly remembered the items while imaged by using event-related functional MRI. Regions of visual and auditory cortex were activated differentially during retrieval of pictures and sounds, respectively. Furthermore, the regions activated during the recall test comprised a subset of those activated during a separate perception task in which subjects actually viewed pictures and heard sounds. Regions activated during the recall test were found to be represented more in late than in early visual and auditory cortex. Therefore, results indicate that retrieval of vivid visual and auditory information can be associated with a reactivation of some of the same sensory regions that were activated during perception of those items.",
"title": ""
},
{
"docid": "2c328d1dd45733ad8063ea89a6b6df43",
"text": "We present Residual Policy Learning (RPL): a simple method for improving nondifferentiable policies using model-free deep reinforcement learning. RPL thrives in complex robotic manipulation tasks where good but imperfect controllers are available. In these tasks, reinforcement learning from scratch remains data-inefficient or intractable, but learning a residual on top of the initial controller can yield substantial improvement. We study RPL in five challenging MuJoCo tasks involving partial observability, sensor noise, model misspecification, and controller miscalibration. By combining learning with control algorithms, RPL can perform long-horizon, sparse-reward tasks for which reinforcement learning alone fails. Moreover, we find that RPL consistently and substantially improves on the initial controllers. We argue that RPL is a promising approach for combining the complementary strengths of deep reinforcement learning and robotic control, pushing the boundaries of what either can achieve independently.",
"title": ""
},
{
"docid": "11b953431233ef988f11c5139a4eb484",
"text": "In this paper, we forecast the reading of an air quality monitoring station over the next 48 hours, using a data-driven method that considers current meteorological data, weather forecasts, and air quality data of the station and that of other stations within a few hundred kilometers. Our predictive model is comprised of four major components: 1) a linear regression-based temporal predictor to model the local factors of air quality, 2) a neural network-based spatial predictor to model global factors, 3) a dynamic aggregator combining the predictions of the spatial and temporal predictors according to meteorological data, and 4) an inflection predictor to capture sudden changes in air quality. We evaluate our model with data from 43 cities in China, surpassing the results of multiple baseline methods. We have deployed a system with the Chinese Ministry of Environmental Protection, providing 48-hour fine-grained air quality forecasts for four major Chinese cities every hour. The forecast function is also enabled on Microsoft Bing Map and MS cloud platform Azure. Our technology is general and can be applied globally for other cities.",
"title": ""
},
{
"docid": "9c41df95c11ec4bed3e0b19b20f912bb",
"text": "Text mining has been defined as “the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources” [6]. Many other industries and areas can also benefit from the text mining tools that are being developed by a number of companies. This paper provides an overview of the text mining tools and technologies that are being developed and is intended to be a guide for organizations who are looking for the most appropriate text mining techniques for their situation. This paper also concentrates to design text and data mining tool to extract the valuable information from curriculum vitae according to concerned requirements. The tool clusters the curriculum vitae into several segments which will help the public and private concerns for their recruitment. Rule based approach is used to develop the algorithm for mining and also it is implemented to extract the valuable information from the curriculum vitae on the web. Analysis of Curriculum vitae is until now, a costly and manual activity. It is subject to all typical variations and limitations in its quality, depending of who is doing it. Automating this analysis using algorithms might deliver much more consistency and preciseness to support the human experts. The experiments involve cooperation with many people having their CV online, as well as several recruiters etc. The algorithms must be developed and improved for processing of existing sets of semi-structured documents information retrieval under uncertainity about quality of the sources.",
"title": ""
},
{
"docid": "7c829563e98a6c75eb9b388bf0627271",
"text": "Research in learning analytics and educational data mining has recently become prominent in the fields of computer science and education. Most scholars in the field emphasize student learning and student data analytics; however, it is also important to focus on teaching analytics and teacher preparation because of their key roles in student learning, especially in K-12 learning environments. Nonverbal communication strategies play an important role in successful interpersonal communication of teachers with their students. In order to assist novice or practicing teachers with exhibiting open and affirmative nonverbal cues in their classrooms, we have designed a multimodal teaching platform with provisions for online feedback. We used an interactive teaching rehearsal software, TeachLivE, as our basic research environment. TeachLivE employs a digital puppetry paradigm as its core technology. Individuals walk into this virtual environment and interact with virtual students displayed on a large screen. They can practice classroom management, pedagogy and content delivery skills with a teaching plan in the TeachLivE environment. We have designed an experiment to evaluate the impact of an online nonverbal feedback application. In this experiment, different types of multimodal data have been collected during two experimental settings. These data include talk-time and nonverbal behaviors of the virtual students, captured in log files; talk time and full body tracking data of the participant; and video recording of the virtual classroom with the participant. 34 student teachers participated in this 30-minute experiment. In each of the settings, the participants were provided with teaching plans from which they taught. All the participants took part in both of the experimental settings. In order to have a balanced experiment design, half of the participants received nonverbal online feedback in their first session and the other half received this feedback in the second session. A visual indication was used for feedback each time the participant exhibited a closed, defensive posture. Based on recorded full-body tracking data, we observed that only those who received feedback in their first session demonstrated a significant number of open postures in the session containing no feedback. However, the post-questionnaire information indicated that all participants were more mindful of their body postures while teaching after they had participated in the study.",
"title": ""
},
{
"docid": "2a4ea3452fe02605144a569797bead9a",
"text": "A novel proximity-coupled probe-fed stacked patch antenna is proposed for Global Navigation Satellite Systems (GNSS) applications. The antenna has been designed to operate for the satellite navigation frequencies in L-band including GPS, GLONASS, Galileo, and Compass (1164-1239 MHz and 1559-1610 MHz). A key feature of our design is the proximity-coupled probe feeds to increase impedance bandwidth and the integrated 90deg broadband balun to improve polarization purity. The final antenna exhibits broad pattern coverage, high gain at the low angles (more than -5 dBi), and VSWR <1.5 for all the operating bands. The design procedures and employed tuning techniques to achieve the desired performance are presented.",
"title": ""
},
{
"docid": "f237b861e72d79008305abea9f53547d",
"text": "Biopharmaceutical companies attempting to increase productivity through novel discovery technologies have fallen short of achieving the desired results. Repositioning existing drugs for new indications could deliver the productivity increases that the industry needs while shifting the locus of production to biotechnology companies. More and more companies are scanning the existing pharmacopoeia for repositioning candidates, and the number of repositioning success stories is increasing.",
"title": ""
},
{
"docid": "e28f0e9aea0b1e8b61b8a425d61eb1ab",
"text": "For applications in navigation and robotics, estimating the 3D pose of objects is as important as detection. Many approaches to pose estimation rely on detecting or tracking parts or keypoints [11, 21]. In this paper we build on a recent state-of-the-art convolutional network for sliding-window detection [10] to provide detection and rough pose estimation in a single shot, without intermediate stages of detecting parts or initial bounding boxes. While not the first system to treat pose estimation as a categorization problem, this is the first attempt to combine detection and pose estimation at the same level using a deep learning approach. The key to the architecture is a deep convolutional network where scores for the presence of an object category, the offset for its location, and the approximate pose are all estimated on a regular grid of locations in the image. The resulting system is as accurate as recent work on pose estimation (42.4% 8 View mAVP on Pascal 3D+ [21] ) and significantly faster (46 frames per second (FPS) on a TITAN X GPU). This approach to detection and rough pose estimation is fast and accurate enough to be widely applied as a pre-processing step for tasks including high-accuracy pose estimation, object tracking and localization, and vSLAM.",
"title": ""
},
{
"docid": "93396325751b44a328b262ca34b40ee5",
"text": "Compression enables us to shift resource bottlenecks between IO and CPU. In modern datacenters, where energy efficiency is a growing concern, the benefits of using compression have not been completely exploited. As MapReduce represents a common computation framework for Internet datacenters, we develop a decision algorithm that helps MapReduce users identify when and where to use compression. For some jobs, using compression gives energy savings of up to 60%. We believe our findings will provide signficant impact on improving datacenter energy efficiency.",
"title": ""
},
{
"docid": "e47574d102d581f3bc4a8b4ea304d216",
"text": "The paper proposes a high torque density design for variable flux machines with Alnico magnets. The proposed design uses tangentially magnetized magnets in order to achieve high air gap flux density and to avoid demagnetization by the armature field. Barriers are also inserted in the rotor to limit the armature flux and to allow the machine to utilize both reluctance and magnet torque components. An analytical procedure is first applied to obtain the initial machine design parameters. Then several modifications are applied to the stator and rotor designs through finite element simulations (FEA) in order to improve the machine efficiency and torque density.",
"title": ""
},
{
"docid": "66b912a52197a98134d911ee236ac869",
"text": "We survey the field of quantum information theory. In particular, we discuss the fundamentals of the field, source coding, quantum error-correcting codes, capacities of quantum channels, measures of entanglement, and quantum cryptography.",
"title": ""
},
{
"docid": "0279c0857b6dfdba5c3fe51137c1f824",
"text": "INTRODUCTION. When deciding about the cause underlying serially presented events, patients with delusions utilise fewer events than controls, showing a \"Jumping-to-Conclusions\" bias. This has been widely hypothesised to be because patients expect to incur higher costs if they sample more information. This hypothesis is, however, unconfirmed. METHODS. The hypothesis was tested by analysing patient and control data using two models. The models provided explicit, quantitative variables characterising decision making. One model was based on calculating the potential costs of making a decision; the other compared a measure of certainty to a fixed threshold. RESULTS. Differences between paranoid participants and controls were found, but not in the way that was previously hypothesised. A greater \"noise\" in decision making (relative to the effective motivation to get the task right), rather than greater perceived costs, best accounted for group differences. Paranoid participants also deviated from ideal Bayesian reasoning more than healthy controls. CONCLUSIONS. The Jumping-to-Conclusions Bias is unlikely to be due to an overestimation of the cost of gathering more information. The analytic approach we used, involving a Bayesian model to estimate the parameters characterising different participant populations, is well suited to testing hypotheses regarding \"hidden\" variables underpinning observed behaviours.",
"title": ""
},
{
"docid": "e8c05d0540f384d0e85b4610c4bad4ef",
"text": "We present DIALDroid, a scalable and accurate tool for analyzing inter-app Inter-Component Communication (ICC) among Android apps, which outperforms current stateof-the-art ICC analysis tools. Using DIALDroid, we performed the first large-scale detection of collusive and vulnerable apps based on inter-app ICC data flows among 110,150 real-world apps and identified key security insights.",
"title": ""
},
{
"docid": "444364c2ab97bef660ab322420fc5158",
"text": "We present a telerobotics research platform that provides complete access to all levels of control via open-source electronics and software. The electronics employs an FPGA to enable a centralized computation and distributed I/O architecture in which all control computations are implemented in a familiar development environment (Linux PC) and low-latency I/O is performed over an IEEE-1394a (FireWire) bus at speeds up to 400 Mbits/sec. The mechanical components are obtained from retired first-generation da Vinci ® Surgical Systems. This system is currently installed at 11 research institutions, with additional installations underway, thereby creating a research community around a common open-source hardware and software platform.",
"title": ""
},
{
"docid": "7d860b431f44d42572fc0787bf452575",
"text": "Time-of-flight (TOF) measurement capability promises to improve PET image quality. We characterized the physical and clinical PET performance of the first Biograph mCT TOF PET/CT scanner (Siemens Medical Solutions USA, Inc.) in comparison with its predecessor, the Biograph TruePoint TrueV. In particular, we defined the improvements with TOF. The physical performance was evaluated according to the National Electrical Manufacturers Association (NEMA) NU 2-2007 standard with additional measurements to specifically address the TOF capability. Patient data were analyzed to obtain the clinical performance of the scanner. As expected for the same size crystal detectors, a similar spatial resolution was measured on the mCT as on the TruePoint TrueV. The mCT demonstrated modestly higher sensitivity (increase by 19.7 ± 2.8%) and peak noise equivalent count rate (NECR) (increase by 15.5 ± 5.7%) with similar scatter fractions. The energy, time and spatial resolutions for a varying single count rate of up to 55 Mcps resulted in 11.5 ± 0.2% (FWHM), 527.5 ± 4.9 ps (FWHM) and 4.1 ± 0.0 mm (FWHM), respectively. With the addition of TOF, the mCT also produced substantially higher image contrast recovery and signal-to-noise ratios in a clinically-relevant phantom geometry. The benefits of TOF were clearly demonstrated in representative patient images.",
"title": ""
},
{
"docid": "31895a2de8b3d9b2c2a3d29e0d7b7f58",
"text": "BACKGROUND\nReducing delays for patients who are safe to be discharged is important for minimising complications, managing costs and improving quality. Barriers to discharge include placement, multispecialty coordination of care and ineffective communication. There are a few recent studies that describe barriers from the perspective of all members of the multidisciplinary team.\n\n\nSTUDY OBJECTIVE\nTo identify the barriers to discharge for patients from our medicine service who had a discharge delay of over 24 hours.\n\n\nMETHODOLOGY\nWe developed and implemented a biweekly survey that was reviewed with attending physicians on each of the five medicine services to identify patients with an unnecessary delay. Separately, we conducted interviews with staff members involved in the discharge process to identify common barriers they observed on the wards.\n\n\nRESULTS\nOver the study period from 28 October to 22 November 2013, out of 259 total discharges, 87 patients had a delay of over 24 hours (33.6%) and experienced a total of 181 barriers. The top barriers from the survey included patient readiness, prolonged wait times for procedures or results, consult recommendations and facility placement. A total of 20 interviews were conducted, from which the top barriers included communication both between staff members and with the patient, timely notification of discharge and lack of discharge standardisation.\n\n\nCONCLUSIONS\nThere are a number of frequent barriers to discharge encountered in our hospital that may be avoidable with planning, effective communication methods, more timely preparation and tools to standardise the discharge process.",
"title": ""
},
{
"docid": "1095311bbe710412173f33134c7d47d4",
"text": "A large number of papers are appearing in the biomedical engineering literature that describe the use of machine learning techniques to develop classifiers for detection or diagnosis of disease. However, the usefulness of this approach in developing clinically validated diagnostic techniques so far has been limited and the methods are prone to overfitting and other problems which may not be immediately apparent to the investigators. This commentary is intended to help sensitize investigators as well as readers and reviewers of papers to some potential pitfalls in the development of classifiers, and suggests steps that researchers can take to help avoid these problems. Building classifiers should be viewed not simply as an add-on statistical analysis, but as part and parcel of the experimental process. Validation of classifiers for diagnostic applications should be considered as part of a much larger process of establishing the clinical validity of the diagnostic technique.",
"title": ""
},
{
"docid": "dd6711fdcf9e2908f65258c2948a1e26",
"text": "Pevzner and Sze [23] considered a precise version of the motif discovery problem and simultaneously issued an algorithmic challenge: find a motif M of length 15, where each planted instance differs from M in 4 positions. Whereas previous algorithms all failed to solve this (15,4)-motif problem. Pevzner and Sze introduced algorithms that succeeded. However, their algorithms failed to solve the considerably more difficult (14,4)-, (16,5)-, and (18,6)-motif problems.\nWe introduce a novel motif discovery algorithm based on the use of random projections of the input's substrings. Experiments on simulated data demonstrate that this algorithm performs better than existing algorithms and, in particular, typically solves the difficult (14,4)-, (16,5)-, and (18,6)-motif problems quite efficiently. A probabilistic estimate shows that the small values of d for which the algorithm fails to recover the planted (l, d)-motif are in all likelihood inherently impossible to solve. We also present experimental results on realistic biological data by identifying ribosome binding sites in prokaryotes as well as a number of known transcriptional regulatory motifs in eukaryotes.",
"title": ""
}
] |
scidocsrr
|
ee3980d847bd4096df0981ec2a67d508
|
Recent Developments in Indian Sign Language Recognition: An Analysis
|
[
{
"docid": "f267b329f52628d3c52a8f618485ae95",
"text": "We present an approach to continuous American Sign Language (ASL) recognition, which uses as input three-dimensional data of arm motions. We use computer vision methods for three-dimensional object shape and motion parameter extraction and an Ascension Technologies Flock of Birds interchangeably to obtain accurate three-dimensional movement parameters of ASL sentences, selected from a 53-sign vocabulary and a widely varied sentence structure. These parameters are used as features for Hidden Markov Models (HMMs). To address coarticulation effects and improve our recognition results, we experimented with two different approaches. The first consists of training context-dependent HMMs and is inspired by speech recognition systems. The second consists of modeling transient movements between signs and is inspired by the characteristics of ASL phonology. Our experiments verified that the second approach yields better recognition results.",
"title": ""
}
] |
[
{
"docid": "ec0bc85d241f71f5511b54f107987e5a",
"text": "We present a new deep learning approach to pose-guided resynthesis of human photographs. At the heart of the new approach is the estimation of the complete body surface texture based on a single photograph. Since the input photograph always observes only a part of the surface, we suggest a new inpainting method that completes the texture of the human body. Rather than working directly with colors of texture elements, the inpainting network estimates an appropriate source location in the input image for each element of the body surface. This correspondence field between the input image and the texture is then further warped into the target image coordinate frame based on the desired pose, effectively establishing the correspondence between the source and the target view even when the pose change is drastic. The final convolutional network then uses the established correspondence and all other available information to synthesize the output image using a fully-convolutional architecture with deformable convolutions. We show stateof-the-art result for pose-guided image synthesis. Additionally, we demonstrate the performance of our system for garment transfer and pose-guided face resynthesis.",
"title": ""
},
{
"docid": "9364e07801fc01e50d0598b61ab642aa",
"text": "Online learning represents a family of machine learning methods, where a learner attempts to tackle some predictive (or any type of decision-making) task by learning from a sequence of data instances one by one at each time. The goal of online learning is to maximize the accuracy/correctness for the sequence of predictions/decisions made by the online learner given the knowledge of correct answers to previous prediction/learning tasks and possibly additional information. This is in contrast to traditional batch or offline machine learning methods that are often designed to learn a model from the entire training data set at once. Online learning has become a promising technique for learning from continuous streams of data in many real-world applications. This survey aims to provide a comprehensive survey of the online machine learning literature through a systematic review of basic ideas and key principles and a proper categorization of different algorithms and techniques. Generally speaking, according to the types of learning tasks and the forms of feedback information, the existing online learning works can be classified into three major categories: (i) online supervised learning where full feedback information is always available, (ii) online learning with limited feedback, and (iii) online unsupervised learning where no feedback is available. Due to space limitation, the survey will be mainly focused on the first category, but also briefly cover some basics of the other two categories. Finally, we also discuss some open issues and attempt to shed light on potential future research directions in this field.",
"title": ""
},
{
"docid": "92ec1f93124ddfa1faa1d7a3ab371935",
"text": "We introduce a novel evolutionary algorithm (EA) with a semantic network-based representation. For enabling this, we establish new formulations of EA variation operators, crossover and mutation, that we adapt to work on semantic networks. The algorithm employs commonsense reasoning to ensure all operations preserve the meaningfulness of the networks, using ConceptNet and WordNet knowledge bases. The algorithm can be classified as a novel memetic algorithm (MA), given that (1) individuals represent pieces of information that undergo evolution, as in the original sense of memetics as it was introduced by Dawkins; and (2) this is different from existing MA, where the word “memetic” has been used as a synonym for local refinement after global optimization. For evaluating the approach, we introduce an analogical similarity-based fitness measure that is computed through structure mapping. This setup enables the open-ended generation of networks analogous to a given base network.",
"title": ""
},
{
"docid": "ac6410d8891491d050b32619dc2bdd50",
"text": "Due to the increase of generation sources in distribution networks, it is becoming very complex to develop and maintain models of these networks. Network operators need to determine reduced models of distribution networks to be used in grid management functions. This paper presents a novel method that synthesizes steady-state models of unbalanced active distribution networks with the use of dynamic measurements (time series) from phasor measurement units (PMUs). Since phasor measurement unit (PMU) measurements may contain errors and bad data, this paper presents the application of a Kalman filter technique for real-time data processing. In addition, PMU data capture the power system's response at different time-scales, which are generated by different types of power system events; the presented Kalman filter has been improved to extract the steady-state component of the PMU measurements to be fed to the steady-state model synthesis application. Performance of the proposed methods has been assessed by real-time hardware-in-the-loop simulations on a sample distribution network.",
"title": ""
},
{
"docid": "e45ba62d473dd5926e6efa2778e567ca",
"text": "This contribution introduces a novel approach to cross-calibrate automotive vision and ranging sensors. The resulting sensor alignment allows the incorporation of multiple sensor data into a detection and tracking framework. Exemplarily, we show how a realtime vehicle detection system, intended for emergency breaking or ACC applications, benefits from the low level fusion of multibeam lidar and vision sensor measurements in discrimination performance and computational complexity",
"title": ""
},
{
"docid": "7012a0cb7b0270b43d733cf96ab33bac",
"text": "The evolution of Cloud computing makes the major changes in computing world as with the assistance of basic cloud computing service models like SaaS, PaaS, and IaaS an organization achieves their business goal with minimum effort as compared to traditional computing environment. On the other hand security of the data in the cloud database server is the key area of concern in the acceptance of cloud. It requires a very high degree of privacy and authentication. To protect the data in cloud database server cryptography is one of the important methods. Cryptography provides various symmetric and asymmetric algorithms to secure the data. This paper presents the symmetric cryptographic algorithm named as AES (Advanced Encryption Standard). It is based on several substitutions, permutation and transformation.",
"title": ""
},
{
"docid": "c47f53874e92c3d036538509f57d0020",
"text": "Cyber-attacks against Smart Grids have been found in the real world. Malware such as Havex and BlackEnergy have been found targeting industrial control systems (ICS) and researchers have shown that cyber-attacks can exploit vulnerabilities in widely used Smart Grid communication standards. This paper addresses a deep investigation of attacks against the manufacturing message specification of IEC 61850, which is expected to become one of the most widely used communication services in Smart Grids. We investigate how an attacker can build a custom tool to execute man-in-the-middle attacks, manipulate data, and affect the physical system. Attack capabilities are demonstrated based on NESCOR scenarios to make it possible to thoroughly test these scenarios in a real system. The goal is to help understand the potential for such attacks, and to aid the development and testing of cyber security solutions. An attack use-case is presented that focuses on the standard for power utility automation, IEC 61850 in the context of inverter-based distributed energy resource devices; especially photovoltaics (PV) generators.",
"title": ""
},
{
"docid": "61f2ffb9f8f54c71e30d5d19580ce54c",
"text": "Galleries, Libraries, Archives and Museums (short: GLAMs) around the globe are beginning to explore the potential of crowdsourcing, i. e. outsourcing specific activities to a community though an open call. In this paper, we propose a typology of these activities, based on an empirical study of a substantial amount of projects initiated by relevant cultural heritage institutions. We use the Digital Content Life Cycle model to study the relation between the different types of crowdsourcing and the core activities of heritage organizations. Finally, we focus on two critical challenges that will define the success of these collaborations between amateurs and professionals: (1) finding sufficient knowledgeable, and loyal users; (2) maintaining a reasonable level of quality. We thus show the path towards a more open, connected and smart cultural heritage: open (the data is open, shared and accessible), connected (the use of linked data allows for interoperable infrastructures, with users and providers getting more and more connected), and smart (the use of knowledge and web technologies allows us to provide interesting data to the right users, in the right context, anytime, anywhere -- both with involved users/consumers and providers). It leads to a future cultural heritage that is open, has intelligent infrastructures and has involved users, consumers and providers.",
"title": ""
},
{
"docid": "c9577cd328841c1574e18677c00731d0",
"text": "Topic modeling refers to the task of discovering the underlying thematic structure in a text corpus, where the output is commonly presented as a report of the top terms appearing in each topic. Despite the diversity of topic modeling algorithms that have been proposed, a common challenge in successfully applying these techniques is the selection of an appropriate number of topics for a given corpus. Choosing too few topics will produce results that are overly broad, while choosing too many will result in the“over-clustering” of a corpus into many small, highly-similar topics. In this paper, we propose a term-centric stability analysis strategy to address this issue, the idea being that a model with an appropriate number of topics will be more robust to perturbations in the data. Using a topic modeling approach based on matrix factorization, evaluations performed on a range of corpora show that this strategy can successfully guide the model selection process.",
"title": ""
},
{
"docid": "8ef51eeb7705a1369103a36f60268414",
"text": "Cloud computing is a new way of delivering computing resources, not a new technology. Computing services ranging from data storage and processing to software, such as email handling, are now available instantly, commitment-free and on-demand. Since we are in a time of belt-tightening, this new economic model for computing has found fertile ground and is seeing massive global investment. According to IDC’s analysis, the worldwide forecast for cloud services in 2009 will be in the order of $17.4bn. The estimation for 2013 amounts to $44.2bn, with the European market ranging from €971m in 2008 to €6,005m in 2013 .",
"title": ""
},
{
"docid": "87c3f3ab2c5c1e9a556ed6f467f613a9",
"text": "In this study, we apply learning-to-rank algorithms to design trading strategies using relative performance of a group of stocks based on investors’ sentiment toward these stocks. We show that learning-to-rank algorithms are effective in producing reliable rankings of the best and the worst performing stocks based on investors’ sentiment. More specifically, we use the sentiment shock and trend indicators introduced in the previous studies, and we design stock selection rules of holding long positions of the top 25% stocks and short positions of the bottom 25% stocks according to rankings produced by learning-to-rank algorithms. We then apply two learning-to-rank algorithms, ListNet and RankNet, in stock selection processes and test long-only and long-short portfolio selection strategies using 10 years of market and news sentiment data. Through backtesting of these strategies from 2006 to 2014, we demonstrate that our portfolio strategies produce risk-adjusted returns superior to the S&P500 index return, the hedge fund industry average performance HFRIEMN, and some sentiment-based approaches without learning-to-rank algorithm during the same period.",
"title": ""
},
{
"docid": "ebfd9accf86d5908c12d2c3b92758620",
"text": "A circular microstrip array with beam focused for RFID applications was presented. An analogy with the optical lens and optical diffraction was made to describe the behaviour of this system. The circular configuration of the array requires less phase shift and exhibite smaller side lobe level compare to a square array. The measurement result shows a good agreement with simulation and theory. This system is a good way to increase the efficiency of RFID communication without useless power. This solution could also be used to develop RFID devices for the localization problematic. The next step of this work is to design a system with an adjustable focus length.",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "3f053da8cc3faa5d0dd9e0f501d751d2",
"text": "An automatic classification system of the music genres is proposed. Based on the timbre features such as mel-frequency cepstral coefficients, the spectro-temporal features are obtained to capture the temporal evolution and variation of the spectral characteristics of the music signal. Mean, variance, minimum, and maximum values of the timbre features are calculated. Modulation spectral flatness, crest, contrast, and valley are estimated for both original spectra and timbre-feature vectors. A support vector machine (SVM) is used as a classifier where an elaborated kernel function is defined. To reduce the computational complexity, an SVM ranker is applied for feature selection. Compared with the best algorithms submitted to the music information retrieval evaluation exchange (MIREX) contests, the proposed method provides higher accuracy at a lower feature dimension for the GTZAN and ISMIR2004 databases.",
"title": ""
},
{
"docid": "784d75662234e45f78426c690356d872",
"text": "Chinese-English parallel corpora are key resources for Chinese-English cross-language information processing, Chinese-English bilingual lexicography, Chinese-English language research and teaching. But so far large-scale Chinese-English corpus is still unavailable yet, given the difficulties and the intensive labours required. In this paper, our work towards building a large-scale Chinese-English parallel corpus is presented. We elaborate on the collection, annotation and mark-up of the parallel Chinese-English texts and the workflow that we used to construct the corpus. In addition, we also present our work toward building tools for constructing and using the corpus easily for different purposes. Among these tools, a parallel concordance tool developed by us is examined in detail. Several applications of the corpus being conducted are also introduced briefly in the paper.",
"title": ""
},
{
"docid": "3e6dbaf4ef18449c82e29e878fa9a8c5",
"text": "The description of a software architecture style must include the structural model of the components and their interactions, the laws governing the dynamic changes in the architecture, and the communication pattern. In our work we represent a system as a graph where hyperedges are components and nodes are ports of communication. The construction and dynamic evolut,ion of the style will be represented as context-free productions and graph rewriting. To model the evolution of the system we propose to use techniques of constraint solving. From this approach we obtain an intuitive way to model systems with nice characteristics for the description of dynamic architectures and reconfiguration and, a unique language to describe the style, model the evolution of the system and prove properties.",
"title": ""
},
{
"docid": "71706d4e580e0e5ba3f1e025e6a3b33e",
"text": "We report a novel paper-based biobattery which generates power from microorganism-containing liquid derived from renewable and sustainable wastewater which is readily accessible in the local environment. The device fuses the art of origami and the technology of microbial fuel cells (MFCs) and has the potential to shift the paradigm for flexible and stackable paper-based batteries by enabling exceptional electrical characteristics and functionalities. 3D, modular, and retractable battery stack is created from (i) 2D paper sheets through high degrees of folding and (ii) multifunctional layers sandwiched for MFC device configuration. The stack is based on ninja star-shaped origami design formed by eight MFC modular blades, which is retractable from sharp shuriken (closed) to round frisbee (opened). The microorganism-containing wastewater is added into an inlet of the closed battery stack and it is transported into each MFC module through patterned fluidic pathways in the paper layers. During operation, the battery stack is transformed into the round frisbee to connect eight MFC modules in series for improving the power output and simultaneously expose all air-cathodes to the air for their cathodic reactions. The device generates desired values of electrical current and potential for powering an LED for more than 20min.",
"title": ""
},
{
"docid": "4770ca4d8c12f949b8010e286d147802",
"text": "The clinical, radiologic, and pathologic findings in radiation injury of the brain are reviewed. Late radiation injury is the major, dose-limiting complication of brain irradiation and occurs in two forms, focal and diffuse, which differ significantly in clinical and radiologic features. Focal and diffuse injuries both include a wide spectrum of abnormalities, from subclinical changes detectable only by MR imaging to overt brain necrosis. Asymptomatic focal edema is commonly seen on CT and MR following focal or large-volume irradiation. Focal necrosis has the CT and MR characteristics of a mass lesion, with clinical evidence of focal neurologic abnormality and raised intracranial pressure. Microscopically, the lesion shows characteristic vascular changes and white matter pathology ranging from demyelination to coagulative necrosis. Diffuse radiation injury is characterized by periventricular decrease in attenuation of CT and increased signal on proton-density and T2-weighted MR images. Most patients are asymptomatic. When clinical manifestations occur, impairment of mental function is the most prominent feature. Pathologic findings in focal and diffuse radiation necrosis are similar. Necrotizing leukoencephalopathy is the form of diffuse white matter injury that follows chemotherapy, with or without irradiation. Vascular disease is less prominent and the latent period is shorter than in diffuse radiation injury; radiologic findings and clinical manifestations are similar. Late radiation injury of large arteries is an occasional cause of postradiation cerebral injury, and cerebral atrophy and mineralizing microangiopathy are common radiologic findings of uncertain clinical significance. Functional imaging by positron emission tomography can differentiate recurrent tumor from focal radiation necrosis with positive and negative predictive values for tumor of 80-90%. Positron emission tomography of the blood-brain barrier, glucose metabolism, and blood flow, together with MR imaging, have demonstrated some of the pathophsiology of late radiation necrosis. Focal glucose hypometabolism on positron emissin tomography in irradiated patients may have prognostic significance for subsequent development of clinically evident radiation necrosis.",
"title": ""
},
{
"docid": "0c5ebaaf0fd85312428b5d6b7479bfb6",
"text": "BACKGROUND\nPovidone-iodine solution is an antiseptic that is used worldwide as surgical paint and is considered to have a low irritant potential. Post-surgical severe irritant dermatitis has been described after the misuse of this antiseptic in the surgical setting.\n\n\nMETHODS\nBetween January 2011 and June 2013, 27 consecutive patients with post-surgical contact dermatitis localized outside of the surgical incision area were evaluated. Thirteen patients were also available for patch testing.\n\n\nRESULTS\nAll patients developed dermatitis the day after the surgical procedure. Povidone-iodine solution was the only liquid in contact with the skin of our patients. Most typical lesions were distributed in a double lumbar parallel pattern, but they were also found in a random pattern or in areas where a protective pad or an occlusive medical device was glued to the skin. The patch test results with povidone-iodine were negative.\n\n\nCONCLUSIONS\nPovidone-iodine-induced post-surgical dermatitis may be a severe complication after prolonged surgical procedures. As stated in the literature and based on the observation that povidone-iodine-induced contact irritant dermatitis occurred in areas of pooling or occlusion, we speculate that povidone-iodine together with occlusion were the causes of the dermatitis epidemic that occurred in our surgical setting. Povidone-iodine dermatitis is a problem that is easily preventable through the implementation of minimal routine changes to adequately dry the solution in contact with the skin.",
"title": ""
}
] |
scidocsrr
|
e70cedd385532e99fbddfbf98fdb5494
|
Variations in cognitive maps: understanding individual differences in navigation.
|
[
{
"docid": "f13ffbb31eedcf46df1aaecfbdf61be9",
"text": "Finding one's way in a large-scale environment may engage different cognitive processes than following a familiar route. The neural bases of these processes were investigated using functional MRI (fMRI). Subjects found their way in one virtual-reality town and followed a well-learned route in another. In a control condition, subjects followed a visible trail. Within subjects, accurate wayfinding activated the right posterior hippocampus. Between-subjects correlations with performance showed that good navigators (i.e., accurate wayfinders) activated the anterior hippocampus during wayfinding and head of caudate during route following. These results coincide with neurophysiological evidence for distinct response (caudate) and place (hippocampal) representations supporting navigation. We argue that the type of representation used influences both performance and concomitant fMRI activation patterns.",
"title": ""
}
] |
[
{
"docid": "afd6d41c0985372a88ff3bb6f91ce5b5",
"text": "Following your need to always fulfil the inspiration to obtain everybody is now simple. Connecting to the internet is one of the short cuts to do. There are so many sources that offer and connect us to other world condition. As one of the products to see in internet, this website becomes a very available place to look for countless ggplot2 elegant graphics for data analysis sources. Yeah, sources about the books from countries in the world are provided.",
"title": ""
},
{
"docid": "7ca2bde925c4bb322fe266b4a9de5004",
"text": "Nuclear transfer of an oocyte into the cytoplasm of another enucleated oocyte has shown that embryogenesis and implantation are influenced by cytoplasmic factors. We report a case of a 30-year-old nulligravida woman who had two failed IVF cycles characterized by all her embryos arresting at the two-cell stage and ultimately had pronuclear transfer using donor oocytes. After her third IVF cycle, eight out of 12 patient oocytes and 12 out of 15 donor oocytes were fertilized. The patient's pronuclei were transferred subzonally into an enucleated donor cytoplasm resulting in seven reconstructed zygotes. Five viable reconstructed embryos were transferred into the patient's uterus resulting in a triplet pregnancy with fetal heartbeats, normal karyotypes and nuclear genetic fingerprinting matching the mother's genetic fingerprinting. Fetal mitochondrial DNA profiles were identical to those from donor cytoplasm with no detection of patient's mitochondrial DNA. This report suggests that a potentially viable pregnancy with normal karyotype can be achieved through pronuclear transfer. Ongoing work to establish the efficacy and safety of pronuclear transfer will result in its use as an aid for human reproduction.",
"title": ""
},
{
"docid": "c9b7832cd306fc022e4a376f10ee8fc8",
"text": "This paper describes a study to assess the influence of a variety of factors on reported level of presence in immersive virtual environments. It introduces the idea of stacking depth, that is, where a participant can simulate the process of entering the virtual environment while already in such an environment, which can be repeated to several levels of depth. An experimental study including 24 subjects was carried out. Half of the subjects were transported between environments by using virtual head-mounted displays, and the other half by going through doors. Three other binary factors were whether or not gravity operated, whether or not the subject experienced a virtual precipice, and whether or not the subject was followed around by a virtual actor. Visual, auditory, and kinesthetic representation systems and egocentric/exocentric perceptual positions were assessed by a preexperiment questionnaire. Presence was assessed by the subjects as their sense of being there, the extent to which they experienced the virtual environments as more the presenting reality than the real world in which the experiment was taking place, and the extent to which the subject experienced the virtual environments as places visited rather than images seen. A logistic regression analysis revealed that subjective reporting of presence was significantly positively associated with visual and kinesthetic representation systems, and negatively with the auditory system. This was not surprising since the virtual reality system used was primarily visual. The analysis also showed a significant and positive association with stacking level depth for those who were transported between environments by using the virtual HMD, and a negative association for those who were transported through doors. Finally, four of the subjects moved their real left arm to match movement of the left arm of the virtual body displayed by the system. These four scored significantly higher on the kinesthetic representation system than the remainder of the subjects.",
"title": ""
},
{
"docid": "d055bc9f5c7feb9712ec72f8050f5fd8",
"text": "An intelligent observer looks at the world and sees not only what is, but what is moving and what can be moved. In other words, the observer sees how the present state of the world can transform in the future. We propose a model that predicts future images by learning to represent the present state and its transformation given only a sequence of images. To do so, we introduce an architecture with a latent state composed of two components designed to capture (i) the present image state and (ii) the transformation between present and future states, respectively. We couple this latent state with a recurrent neural network (RNN) core that predicts future frames by transforming past states into future states by applying the accumulated state transformation with a learned operator. We describe how this model can be integrated into an encoder-decoder convolutional neural network (CNN) architecture that uses weighted residual connections to integrate representations of the past with representations of the future. Qualitatively, our approach generates image sequences that are stable and capture realistic motion over multiple predicted frames, without requiring adversarial training. Quantitatively, our method achieves prediction results comparable to state-of-the-art results on standard image prediction benchmarks (Moving MNIST, KTH, and UCF101).",
"title": ""
},
{
"docid": "0ff1837d40bbd6bbfe4f5ec69f83de90",
"text": "Nowadays, Telemarketing is an interactive technique of direct marketing that many banks apply to present a long term deposit to bank customers via the phone. Although the offering like this manner is powerful, it may make the customers annoyed. The data prediction is a popular task in data mining because it can be applied to solve this problem. However, the predictive performance may be decreased in case of the input data have many features like the bank customer information. In this paper, we focus on how to reduce the feature of input data and balance the training set for the predictive model to help the bank to increase the prediction rate. In the system performance evaluation, all accuracy rates of each predictive model based on the proposed approach compared with the original predictive model based on the truth positive and receiver operating characteristic measurement show the high performance in which the smaller number of features.",
"title": ""
},
{
"docid": "dc8af68ed9bbfd8e24c438771ca1d376",
"text": "Pedestrian detection has progressed significantly in the last years. However, occluded people are notoriously hard to detect, as their appearance varies substantially depending on a wide range of occlusion patterns. In this paper, we aim to propose a simple and compact method based on the FasterRCNN architecture for occluded pedestrian detection. We start with interpreting CNN channel features of a pedestrian detector, and we find that different channels activate responses for different body parts respectively. These findings motivate us to employ an attention mechanism across channels to represent various occlusion patterns in one single model, as each occlusion pattern can be formulated as some specific combination of body parts. Therefore, an attention network with self or external guidances is proposed as an add-on to the baseline FasterRCNN detector. When evaluating on the heavy occlusion subset, we achieve a significant improvement of 8pp to the baseline FasterRCNN detector on CityPersons and on Caltech we outperform the state-of-the-art method by 4pp.",
"title": ""
},
{
"docid": "6bc94f9b5eb90ba679964cf2a7df4de4",
"text": "New high-frequency data collection technologies and machine learning analysis techniques could offer new insights into learning, especially in tasks in which students have ample space to generate unique, personalized artifacts, such as a computer program, a robot, or a solution to an engineering challenge. To date most of the work on learning analytics and educational data mining has focused on online courses or cognitive tutors, in which the tasks are more structured and the entirety of interaction happens in front of a computer. In this paper, I argue that multimodal learning analytics could offer new insights into students' learning trajectories, and present several examples of this work and its educational application.",
"title": ""
},
{
"docid": "04a7039b3069816b157e5ca0f7541b94",
"text": "Visible light communication is innovative and active technique in modern digital wireless communication. In this paper, we describe a new innovative vlc system which having a better performance and efficiency to other previous system. Visible light communication (VLC) is an efficient technology in order to the improve the speed and the robustness of the communication link in indoor optical wireless communication system. In order to achieve high data rate in communication for VLC system, multiple input multiple output (MIMO) with OFDM is a feasible option. However, the contemporary MIMO with OFDM VLC system are lacks from the diversity and the experiences performance variation through different optical channels. This is mostly because of characteristics of optical elements used for making the receiver. In this paper, we analyze the imaging diversity in MIMO with OFDM VLC system. Simulation results are shown diversity achieved in the different cases.",
"title": ""
},
{
"docid": "53d04c06efb468e14e2ee0b485caf66f",
"text": "The analysis of time-oriented data is an important task in many application scenarios. In recent years, a variety of techniques for visualizing such data have been published. This variety makes it difficult for prospective users to select methods or tools that are useful for their particular task at hand. In this article, we develop and discuss a systematic view on the diversity of methods for visualizing time-oriented data. With the proposed categorization we try to untangle the visualization of time-oriented data, which is such an important concern in Visual Analytics. The categorization is not only helpful for users, but also for researchers to identify future tasks in Visual Analytics. r 2007 Elsevier Ltd. All rights reserved. MSC: primary 68U05; 68U35",
"title": ""
},
{
"docid": "c49d4b1f2ac185bcb070cb105798417a",
"text": "The performance of face detection has been largely improved with the development of convolutional neural network. However, the occlusion issue due to mask and sunglasses, is still a challenging problem. The improvement on the recall of these occluded cases usually brings the risk of high false positives. In this paper, we present a novel face detector called Face Attention Network (FAN), which can significantly improve the recall of the face detection problem in the occluded case without compromising the speed. More specifically, we propose a new anchor-level attention, which will highlight the features from the face region. Integrated with our anchor assign strategy and data augmentation techniques, we obtain state-of-art results on public face ∗Equal contribution. †Work was done during an internship at Megvii Research. detection benchmarks like WiderFace and MAFA. The code will be released for reproduction.",
"title": ""
},
{
"docid": "11d1978a3405f63829e02ccb73dcd75f",
"text": "The performance of two commercial simulation codes, Ansys Fluent and Comsol Multiphysics, is thoroughly examined for a recently established two-phase flow benchmark test case. In addition, the commercial codes are directly compared with the newly developed academic code, FeatFlow TP2D. The results from this study show that the commercial codes fail to converge and produce accurate results, and leave much to be desired with respect to direct numerical simulation of flows with free interfaces. The academic code on the other hand was shown to be computationally efficient, produced very accurate results, and outperformed the commercial codes by a magnitude or more.",
"title": ""
},
{
"docid": "8cd666c0796c0fe764bc8de0d7a20fa3",
"text": "$$\\mathcal{Q}$$ -learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. This paper presents and proves in detail a convergence theorem for $$\\mathcal{Q}$$ -learning based on that outlined in Watkins (1989). We show that $$\\mathcal{Q}$$ -learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many $$\\mathcal{Q}$$ values can be changed each iteration, rather than just one.",
"title": ""
},
{
"docid": "8f183ac262aac98c563bf9dcc69b1bf5",
"text": "Functional infrared thermal imaging (fITI) is considered a promising method to measure emotional autonomic responses through facial cutaneous thermal variations. However, the facial thermal response to emotions still needs to be investigated within the framework of the dimensional approach to emotions. The main aim of this study was to assess how the facial thermal variations index the emotional arousal and valence dimensions of visual stimuli. Twenty-four participants were presented with three groups of standardized emotional pictures (unpleasant, neutral and pleasant) from the International Affective Picture System. Facial temperature was recorded at the nose tip, an important region of interest for facial thermal variations, and compared to electrodermal responses, a robust index of emotional arousal. Both types of responses were also compared to subjective ratings of pictures. An emotional arousal effect was found on the amplitude and latency of thermal responses and on the amplitude and frequency of electrodermal responses. The participants showed greater thermal and dermal responses to emotional than to neutral pictures with no difference between pleasant and unpleasant ones. Thermal responses correlated and the dermal ones tended to correlate with subjective ratings. Finally, in the emotional conditions compared to the neutral one, the frequency of simultaneous thermal and dermal responses increased while both thermal or dermal isolated responses decreased. Overall, this study brings convergent arguments to consider fITI as a promising method reflecting the arousal dimension of emotional stimulation and, consequently, as a credible alternative to the classical recording of electrodermal activity. The present research provides an original way to unveil autonomic implication in emotional processes and opens new perspectives to measure them in touchless conditions.",
"title": ""
},
{
"docid": "2eb344b6701139be184624307a617c1b",
"text": "This work combines the central ideas from two different areas, crowd simulation and social network analysis, to tackle some existing problems in both areas from a new angle. We present a novel spatio-temporal social crowd simulation framework, Social Flocks, to revisit three essential research problems, (a) generation of social networks, (b) community detection in social networks, (c) modeling collective social behaviors in crowd simulation. Our framework produces social networks that satisfy the properties of high clustering coefficient, low average path length, and power-law degree distribution. It can also be exploited as a novel dynamic model for community detection. Finally our framework can be used to produce real-life collective social behaviors over crowds, including community-guided flocking, leader following, and spatio-social information propagation. Social Flocks can serve as visualization of simulated crowds for domain experts to explore the dynamic effects of the spatial, temporal, and social factors on social networks. In addition, it provides an experimental platform of collective social behaviors for social gaming and movie animations. Social Flocks demo is at http://mslab.csie.ntu.edu.tw/socialflocks/ .",
"title": ""
},
{
"docid": "b5b8ae3b7b307810e1fe39630bc96937",
"text": "Up to this point in the text we have considered the use of the logistic regression model in settings where we observe a single dichotomous response for a sample of statistically independent subjects. However, there are settings where the assumption of independence of responses may not hold for a variety of reasons. For example, consider a study of asthma in children in which subjects are interviewed bi-monthly for 1 year. At each interview the date is recorded and the mother is asked whether, during the previous 2 months, her child had an asthma attack severe enough to require medical attention, whether the child had a chest cold, and how many smokers lived in the household. The child’s age and race are recorded at the first interview. The primary outcome is the occurrence of an asthma attack. What differs here is the lack of independence in the observations due to the fact that we have six measurements on each child. In this example, each child represents a cluster of correlated observations of the outcome. The measurements of the presence or absence of a chest cold and the number of smokers residing in the household can change from observation to observation and thus are called clusterspecific or time-varying covariates. The date changes in a systematic way and is recorded to model possible seasonal effects. The child’s age and race are constant for the duration of the study and are referred to as cluster-level or time-invariant covariates. The terms clusters, subjects, cluster-specific and cluster-level covariates are general enough to describe multiple measurements on a single subject or single measurements on different but related subjects. An example of the latter setting would be a study of all children in a household. Repeated measurements on the same subject or a subject clustered in some sort of unit (household, hospital, or physician) are the two most likely scenarios leading to correlated data.",
"title": ""
},
{
"docid": "cfc5b5676552c2e90b70ed3cfa5ac022",
"text": "UNLABELLED\nWe present a first-draft digital reconstruction of the microcircuitry of somatosensory cortex of juvenile rat. The reconstruction uses cellular and synaptic organizing principles to algorithmically reconstruct detailed anatomy and physiology from sparse experimental data. An objective anatomical method defines a neocortical volume of 0.29 ± 0.01 mm(3) containing ~31,000 neurons, and patch-clamp studies identify 55 layer-specific morphological and 207 morpho-electrical neuron subtypes. When digitally reconstructed neurons are positioned in the volume and synapse formation is restricted to biological bouton densities and numbers of synapses per connection, their overlapping arbors form ~8 million connections with ~37 million synapses. Simulations reproduce an array of in vitro and in vivo experiments without parameter tuning. Additionally, we find a spectrum of network states with a sharp transition from synchronous to asynchronous activity, modulated by physiological mechanisms. The spectrum of network states, dynamically reconfigured around this transition, supports diverse information processing strategies.\n\n\nPAPERCLIP\nVIDEO ABSTRACT.",
"title": ""
},
{
"docid": "526e36dd9e3db50149687ea6358b4451",
"text": "A query over RDF data is usually expressed in terms of matching between a graph representing the target and a huge graph representing the source. Unfortunately, graph matching is typically performed in terms of subgraph isomorphism, which makes semantic data querying a hard problem. In this paper we illustrate a novel technique for querying RDF data in which the answers are built by combining paths of the underlying data graph that align with paths specified by the query. The approach is approximate and generates the combinations of the paths that best align with the query. We show that, in this way, the complexity of the overall process is significantly reduced and verify experimentally that our framework exhibits an excellent behavior with respect to other approaches in terms of both efficiency and effectiveness.",
"title": ""
},
{
"docid": "96d8174d188fa89c17ffa0b255584628",
"text": "In the past years we have witnessed the emergence of the new discipline of computational social science, which promotes a new data-driven and computation-based approach to social sciences. In this article we discuss how the availability of new technologies such as online social media and mobile smartphones has allowed researchers to passively collect human behavioral data at a scale and a level of granularity that were just unthinkable some years ago. We also discuss how these digital traces can then be used to prove (or disprove) existing theories and develop new models of human behavior.",
"title": ""
},
{
"docid": "1d9e03eb11328f96eaee1f70dcf2a539",
"text": "Crossbar architecture has been widely adopted in neural network accelerators due to the efficient implementations on vector-matrix multiplication operations. However, in the case of convolutional neural networks (CNNs), the efficiency is compromised dramatically because of the large amounts of data reuse. Although some mapping methods have been designed to achieve a balance between the execution throughput and resource overhead, the resource consumption cost is still huge while maintaining the throughput. Network pruning is a promising and widely studied method to shrink the model size, whereas prior work for CNNs compression rarely considered the crossbar architecture and the corresponding mapping method and cannot be directly utilized by crossbar-based neural network accelerators. This paper proposes a crossbar-aware pruning framework based on a formulated $L_{0}$ -norm constrained optimization problem. Specifically, we design an $L_{0}$ -norm constrained gradient descent with relaxant probabilistic projection to solve this problem. Two types of sparsity are successfully achieved: 1) intuitive crossbar-grain sparsity and 2) column-grain sparsity with output recombination, based on which we further propose an input feature maps reorder method to improve the model accuracy. We evaluate our crossbar-aware pruning framework on the median-scale CIFAR10 data set and the large-scale ImageNet data set with VGG and ResNet models. Our method is able to reduce the crossbar overhead by 44%–72% with insignificant accuracy degradation. This paper significantly reduce the resource overhead and the related energy cost and provides a new co-design solution for mapping CNNs onto various crossbar devices with much better efficiency.",
"title": ""
}
] |
scidocsrr
|
36f0805e18c045c98071bd2b1469cab3
|
Fake News Detection Enhancement with Data Imputation
|
[
{
"docid": "3f9c720773146d83c61cfbbada3938c4",
"text": "How fake news goes viral via social media? How does its propagation pattern differ from real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, out of microblog posts based on their propagation structure. We firstly model microblog posts diffusion with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-ofthe-art rumor detection models.",
"title": ""
},
{
"docid": "756b25456494b3ece9b240ba3957f91c",
"text": "In this paper we introduce the task of fact checking, i.e. the assessment of the truthfulness of a claim. The task is commonly performed manually by journalists verifying the claims made by public figures. Furthermore, ordinary citizens need to assess the truthfulness of the increasing volume of statements they consume. Thus, developing fact checking systems is likely to be of use to various members of society. We first define the task and detail the construction of a publicly available dataset using statements fact-checked by journalists available online. Then, we discuss baseline approaches for the task and the challenges that need to be addressed. Finally, we discuss how fact checking relates to mainstream natural language processing tasks and can stimulate further research.",
"title": ""
}
] |
[
{
"docid": "2e731a5ec3abb71ad0bc3253e2688fbe",
"text": "Post-task ratings of difficulty in a usability test have the potential to provide diagnostic information and be an additional measure of user satisfaction. But the ratings need to be reliable as well as easy to use for both respondents and researchers. Three one-question rating types were compared in a study with 26 participants who attempted the same five tasks with two software applications. The types were a Likert scale, a Usability Magnitude Estimation (UME) judgment, and a Subjective Mental Effort Question (SMEQ). All three types could distinguish between the applications with 26 participants, but the Likert and SMEQ types were more sensitive with small sample sizes. Both the Likert and SMEQ types were easy to learn and quick to execute. The online version of the SMEQ question was highly correlated with other measures and had equal sensitivity to the Likert question type.",
"title": ""
},
{
"docid": "d5b4018422fdee8d3f4f33343f3de290",
"text": "Given a pair of handwritten documents written by different individuals, compute a document similarity score irrespective of (i) handwritten styles, (ii) word forms, word ordering and word overflow. • IIIT-HWS: Introducing a large scale synthetic corpus of handwritten word images for enabling deep architectures. • HWNet: A deep CNN architecture for state of the art handwritten word spotting in multi-writer scenarios. • MODS: Measure of document similarity score irrespective of word forms, ordering and paraphrasing of the content. • Applications in Educational Scenario: Comparing handwritten assignments, searching through instructional videos. 2. Contributions 3. Challenges",
"title": ""
},
{
"docid": "c4256017c214eabda8e5b47c604e0e49",
"text": "In this paper, a multi-band antenna for 4G wireless systems is proposed. The proposed antenna consists of a modified planar inverted-F antenna with additional branch line for wide bandwidth and a folded monopole antenna. The antenna provides wide bandwidth for covering the hepta-band LTE/GSM/UMTS operation. The measured 6-dB return loss bandwidth was 169 MHz (793 MHz-962 MHz) at the low frequency band and 1030 MHz (1700 MHz-2730 MHz) at the high frequency band. The overall dimension of the proposed antenna is 55 mm × 110 mm × 5 mm.",
"title": ""
},
{
"docid": "016f1963dc657a6148afc7f067ad7c82",
"text": "Privacy protection is a crucial problem in many biomedical signal processing applications. For this reason, particular attention has been given to the use of secure multiparty computation techniques for processing biomedical signals, whereby nontrusted parties are able to manipulate the signals although they are encrypted. This paper focuses on the development of a privacy preserving automatic diagnosis system whereby a remote server classifies a biomedical signal provided by the client without getting any information about the signal itself and the final result of the classification. Specifically, we present and compare two methods for the secure classification of electrocardiogram (ECG) signals: the former based on linear branching programs (a particular kind of decision tree) and the latter relying on neural networks. The paper deals with all the requirements and difficulties related to working with data that must stay encrypted during all the computation steps, including the necessity of working with fixed point arithmetic with no truncation while guaranteeing the same performance of a floating point implementation in the plain domain. A highly efficient version of the underlying cryptographic primitives is used, ensuring a good efficiency of the two proposed methods, from both a communication and computational complexity perspectives. The proposed systems prove that carrying out complex tasks like ECG classification in the encrypted domain efficiently is indeed possible in the semihonest model, paving the way to interesting future applications wherein privacy of signal owners is protected by applying high security standards.",
"title": ""
},
{
"docid": "412c61657893bb1ed2f579936d47dc02",
"text": "In this paper, we propose a simple yet effective method for multiple music source separation using convolutional neural networks. Stacked hourglass network, which was originally designed for human pose estimation in natural images, is applied to a music source separation task. The network learns features from a spectrogram image across multiple scales and generates masks for each music source. The estimated mask is refined as it passes over stacked hourglass modules. The proposed framework is able to separate multiple music sources using a single network. Experimental results on MIR-1K and DSD100 datasets validate that the proposed method achieves competitive results comparable to the state-of-the-art methods in multiple music source separation and singing voice separation tasks.",
"title": ""
},
{
"docid": "042fcc75e4541d27b97e8c2fe02a2ddf",
"text": "Folk medicine suggests that pomegranate (peels, seeds and leaves) has anti-inflammatory properties; however, the precise mechanisms by which this plant affects the inflammatory process remain unclear. Herein, we analyzed the anti-inflammatory properties of a hydroalcoholic extract prepared from pomegranate leaves using a rat model of lipopolysaccharide-induced acute peritonitis. Male Wistar rats were treated with either the hydroalcoholic extract, sodium diclofenac, or saline, and 1 h later received an intraperitoneal injection of lipopolysaccharides. Saline-injected animals (i. p.) were used as controls. Animals were culled 4 h after peritonitis induction, and peritoneal lavage and peripheral blood samples were collected. Serum and peritoneal lavage levels of TNF-α as well as TNF-α mRNA expression in peritoneal lavage leukocytes were quantified. Total and differential leukocyte populations were analyzed in peritoneal lavage samples. Lipopolysaccharide-induced increases of both TNF-α mRNA and protein levels were diminished by treatment with either pomegranate leaf hydroalcoholic extract (57 % and 48 % mean reduction, respectively) or sodium diclofenac (41 % and 33 % reduction, respectively). Additionally, the numbers of peritoneal leukocytes, especially neutrophils, were markedly reduced in hydroalcoholic extract-treated rats with acute peritonitis. These results demonstrate that pomegranate leaf extract may be used as an anti-inflammatory drug which suppresses the levels of TNF-α in acute inflammation.",
"title": ""
},
{
"docid": "ee1bc0f1681240eaf630421bb2e6fbb2",
"text": "The Caltech Multi-Vehicle Wireless Testbed (MVWT) is a platform designed to explore theoretical advances in multi-vehicle coordination and control, networked control systems and high confidence distributed computation. The contribution of this report is to present simulation and experimental results on the generation and implementation of optimal trajectories for the MVWT vehicles. The vehicles are nonlinear, spatially constrained and their input controls are bounded. The trajectories are generated using the NTG software package developed at Caltech. Minimum time trajectories and the application of Model Predictive Control (MPC) are investigated.",
"title": ""
},
{
"docid": "8fd049da24568dea2227483415532f9b",
"text": "The notion of “semiotic scaffolding”, introduced into the semiotic discussions by Jesper Hoffmeyer in December of 2000, is proving to be one of the single most important concepts for the development of semiotics as we seek to understand the full extent of semiosis and the dependence of evolution, particularly in the living world, thereon. I say “particularly in the living world”, because there has been from the first a stubborn resistance among semioticians to seeing how a semiosis prior to and/or independent of living beings is possible. Yet the universe began in a state not only lifeless but incapable of supporting life, and somehow “moved” from there in the direction of being able to sustain life and finally of actually doing so. Wherever dyadic interactions result indirectly in a new condition that either moves the universe closer to being able to sustain life, or moves life itself in the direction not merely of sustaining itself but opening the way to new forms of life, we encounter a “thirdness” in nature of exactly the sort that semiosic triadicity alone can explain. This is the process, both within and without the living world, that requires scaffolding. This essay argues that a fuller understanding of this concept shows why “semiosis” says clearly what “evolution” says obscurely.",
"title": ""
},
{
"docid": "2c3877232f71fe80a21fff336256c627",
"text": "OBJECTIVE\nTo examine the association between absence of the nasal bone at the 11-14-week ultrasound scan and chromosomal defects.\n\n\nMETHODS\nUltrasound examination was carried out in 3829 fetuses at 11-14 weeks' gestation immediately before fetal karyotyping. At the scan the fetal crown-rump length (CRL) and nuchal translucency (NT) thickness were measured and the fetal profile was examined for the presence or absence of the nasal bone. Maternal characteristics including ethnic origin were also recorded.\n\n\nRESULTS\nThe fetal profile was successfully examined in 3788 (98.9%) cases. In 3358/3788 cases the fetal karyotype was normal and in 430 it was abnormal. In the chromosomally normal group the incidence of absent nasal bone was related firstly to the ethnic origin of the mother (2.8% for Caucasians, 10.4% for Afro-Caribbeans and 6.8% for Asians), secondly to fetal CRL (4.6% for CRL of 45-54 mm, 3.9% for CRL of 55-64 mm, 1.5% for CRL of 65-74 mm and 1.0% for CRL of 75-84 mm) and thirdly, to NT thickness, (1.8% for NT < 2.5 mm, 3.4% for NT 2.5-3.4 mm, 5.0% for NT 3.5-4.4 mm and 11.8% for NT > or = 4.5 mm. In the chromosomally abnormal group the nasal bone was absent in 161/242 (66.9%) with trisomy 21, in 48/84 (57.1%) with trisomy 18, in 7/22 (31.8%) with trisomy 13, in 3/34 (8.8%) with Turner syndrome and in 4/48 (8.3%) with other defects.\n\n\nCONCLUSION\nAt the 11-14-week scan the incidence of absent nasal bone is related to the presence or absence of chromosomal defects, CRL, NT thickness and ethnic origin.",
"title": ""
},
{
"docid": "6064bdefac3e861bcd46fa303b0756be",
"text": "Some models of textual corpora employ text generation methods involving n-gram statistics, while others use latent topic variables inferred using the \"bag-of-words\" assumption, in which word order is ignored. Previously, these methods have not been combined. In this work, I explore a hierarchical generative probabilistic model that incorporates both n-gram statistics and latent topic variables by extending a unigram topic model to include properties of a hierarchical Dirichlet bigram language model. The model hyperparameters are inferred using a Gibbs EM algorithm. On two data sets, each of 150 documents, the new model exhibits better predictive accuracy than either a hierarchical Dirichlet bigram language model or a unigram topic model. Additionally, the inferred topics are less dominated by function words than are topics discovered using unigram statistics, potentially making them more meaningful.",
"title": ""
},
{
"docid": "86f1142b9ee47dfbe110907d72a4803a",
"text": "Existing studies on semantic parsing mainly focus on the in-domain setting. We formulate cross-domain semantic parsing as a domain adaptation problem: train a semantic parser on some source domains and then adapt it to the target domain. Due to the diversity of logical forms in different domains, this problem presents unique and intriguing challenges. By converting logical forms into canonical utterances in natural language, we reduce semantic parsing to paraphrasing, and develop an attentive sequence-to-sequence paraphrase model that is general and flexible to adapt to different domains. We discover two problems, small micro variance and large macro variance, of pretrained word embeddings that hinder their direct use in neural networks, and propose standardization techniques as a remedy. On the popular OVERNIGHT dataset, which contains eight domains, we show that both cross-domain training and standardized pre-trained word embedding can bring significant improvement.",
"title": ""
},
{
"docid": "5f5c74e55d9be8cbad43c543d6898f3d",
"text": "Multi robot systems are envisioned to play an important role in many robotic applications. A main prerequisite for a team deployed in a wide unknown area is the capability of autonomously navigate, exploiting the information acquired through the on-line estimation of both robot poses and surrounding environment model, according to Simultaneous Localization And Mapping (SLAM) framework. As team coordination is improved, distributed techniques for filtering are required in order to enhance autonomous exploration and large scale SLAM increasing both efficiency and robustness of operation. Although Rao-Blackwellized Particle Filters (RBPF) have been demonstrated to be an effective solution to the problem of single robot SLAM, few extensions to teams of robots exist, and these approaches are characterized by strict assumptions on both communication bandwidth and prior knowledge on relative poses of the teammates. In the present paper we address the problem of multi robot SLAM in the case of limited communication and unknown relative initial poses. Starting from the well established single robot RBPF-SLAM, we propose a simple technique which jointly estimates SLAM posterior of the robots by fusing the prioceptive and the eteroceptive information acquired by each teammate. The approach intrinsically reduces the amount of data to be exchanged among the robots, while taking into account the uncertainty in relative pose measurements. Moreover it can be naturally extended to different communication technologies (bluetooth, RFId, wifi, etc.) regardless their sensing range. The proposed approach is validated through experimental test.",
"title": ""
},
{
"docid": "5f8ddaa9130446373a9a5d44c17ca604",
"text": "Object detection is a crucial task for autonomous driving. In addition to requiring high accuracy to ensure safety, object detection for autonomous driving also requires realtime inference speed to guarantee prompt vehicle control, as well as small model size and energy efficiency to enable embedded system deployment.,,,,,, In this work, we propose SqueezeDet, a fully convolutional neural network for object detection that aims to simultaneously satisfy all of the above constraints. In our network we use convolutional layers not only to extract feature maps, but also as the output layer to compute bounding boxes and class probabilities. The detection pipeline of our model only contains a single forward pass of a neural network, thus it is extremely fast. Our model is fullyconvolutional, which leads to small model size and better energy efficiency. Finally, our experiments show that our model is very accurate, achieving state-of-the-art accuracy on the KITTI [10] benchmark. The source code of SqueezeDet is open-source released.",
"title": ""
},
{
"docid": "48cdea9a78353111d236f6d0f822dc3a",
"text": "Support vector machines (SVMs) with the gaussian (RBF) kernel have been popular for practical use. Model selection in this class of SVMs involves two hyper parameters: the penalty parameter C and the kernel width . This letter analyzes the behavior of the SVM classifier when these hyper parameters take very small or very large values. Our results help in understanding the hyperparameter space that leads to an efficient heuristic method of searching for hyperparameter values with small generalization errors. The analysis also indicates that if complete model selection using the gaussian kernel has been conducted, there is no need to consider linear SVM.",
"title": ""
},
{
"docid": "e22495967c8ed452f552ab79fa7333be",
"text": "Recently developed object detectors employ a convolutional neural network (CNN) by gradually increasing the number of feature layers with a pyramidal shape instead of using a featurized image pyramid. However, the different abstraction levels of CNN feature layers often limit the detection performance, especially on small objects. To overcome this limitation, we propose a CNN-based object detection architecture, referred to as a parallel feature pyramid (FP) network (PFPNet), where the FP is constructed by widening the network width instead of increasing the network depth. First, we adopt spatial pyramid pooling and some additional feature transformations to generate a pool of feature maps with different sizes. In PFPNet, the additional feature transformation is performed in parallel, which yields the feature maps with similar levels of semantic abstraction across the scales. We then resize the elements of the feature pool to a uniform size and aggregate their contextual information to generate each level of the final FP. The experimental results confirmed that PFPNet increases the performance of the latest version of the single-shot multi-box detector (SSD) by mAP of 6.4% AP and especially, 7.8% APsmall on the MS-COCO dataset.",
"title": ""
},
{
"docid": "18011cbde7d1a16da234c1e886371a6c",
"text": "The increased prevalence of cardiovascular disease among the aging population has prompted greater interest in the field of smart home monitoring and unobtrusive cardiac measurements. This paper introduces the design of a capacitive electrocardiogram (ECG) sensor that measures heart rate with no conscious effort from the user. The sensor consists of two active electrodes and an analog processing circuit that is low cost and customizable to the surfaces of common household objects. Prototype testing was performed in a home laboratory by embedding the sensor into a couch, walker, office and dining chairs. The sensor produced highly accurate heart rate measurements (<; 2.3% error) via either direct skin contact or through one and two layers of clothing. The sensor requires no gel dielectric and no grounding electrode, making it particularly suited to the “zero-effort” nature of an autonomous smart home environment. Motion artifacts caused by deviations in body contact with the electrodes were identified as the largest source of unreliability in continuous ECG measurements and will be a primary focus in the next phase of this project.",
"title": ""
},
{
"docid": "4fe8d9aef1b635d0dc8e6d1dfb3b17f3",
"text": "A fully differential architecture from the antenna to the integrated circuit is proposed for radio transceivers in this paper. The physical implementation of the architecture into truly single-chip radio transceivers is described for the first time. Two key building blocks, the differential antenna and the differential transmit-receive (T-R) switch, were designed, fabricated, and tested. The differential antenna implemented in a package in low-temperature cofired-ceramic technology achieved impedance bandwidth of 2%, radiation efficiency of 84%, and gain of 3.2 dBi at 5.425 GHz in a size of 15 x 15 x 1.6 mm3. The differential T-R switch in a standard complementary metal-oxide-semiconductor technology achieved 1.8-dB insertion loss, 15-dB isolation, and 15-dBm 1-dB power compression point (P 1dB) without using additional techniques to enhance the linearity at 5.425 GHz in a die area of 60 x 40 mum2.",
"title": ""
},
{
"docid": "3473e863d335725776281fe2082b756f",
"text": "Visual tracking using multiple features has been proved as a robust approach because features could complement each other. Since different types of variations such as illumination, occlusion, and pose may occur in a video sequence, especially long sequence videos, how to properly select and fuse appropriate features has become one of the key problems in this approach. To address this issue, this paper proposes a new joint sparse representation model for robust feature-level fusion. The proposed method dynamically removes unreliable features to be fused for tracking by using the advantages of sparse representation. In order to capture the non-linear similarity of features, we extend the proposed method into a general kernelized framework, which is able to perform feature fusion on various kernel spaces. As a result, robust tracking performance is obtained. Both the qualitative and quantitative experimental results on publicly available videos show that the proposed method outperforms both sparse representation-based and fusion based-trackers.",
"title": ""
},
{
"docid": "5c0b73ea742ce615d072ae5eedc7f56f",
"text": "Drama, at least according to the Aristotelian view, is effective inasmuch as it successfully mirrors real aspects of human behavior. This leads to the hypothesis that successful dramas will portray fictional social networks that have the same properties as those typical of human beings across ages and cultures. We outline a methodology for investigating this hypothesis and use it to examine ten of Shakespeare's plays. The cliques and groups portrayed in the plays correspond closely to those which have been observed in spontaneous human interaction, including in hunter-gatherer societies, and the networks of the plays exhibit \"small world\" properties of the type which have been observed in many human-made and natural systems.",
"title": ""
},
{
"docid": "4328277c12a0f2478446841f8082601f",
"text": "Short circuit protection remains one of the major technical barriers in DC microgrids. This paper reviews state of the art of DC solid state circuit breakers (SSCBs). A new concept of a self-powered SSCB using normally-on wideband gap (WBG) semiconductor devices as the main static switch is described in this paper. The new SSCB detects short circuit faults by sensing its terminal voltage rise, and draws power from the fault condition itself to turn and hold off the static switch. The new two-terminal SSCB can be directly placed in a circuit branch without requiring any external power supply or extra wiring. Challenges and future trends in protecting low voltage distribution microgrids against short circuit and other faults are discussed.",
"title": ""
}
] |
scidocsrr
|
e67f28a335c0596d3ea636c5722ff0e2
|
Comparison between security majors in virtual machine and linux containers
|
[
{
"docid": "b7222f86da6f1e44bd1dca88eb59dc4b",
"text": "A virtualized system includes a new layer of software, the virtual machine monitor. The VMM's principal role is to arbitrate accesses to the underlying physical host platform's resources so that multiple operating systems (which are guests of the VMM) can share them. The VMM presents to each guest OS a set of virtual platform interfaces that constitute a virtual machine (VM). Once confined to specialized, proprietary, high-end server and mainframe systems, virtualization is now becoming more broadly available and is supported in off-the-shelf systems based on Intel architecture (IA) hardware. This development is due in part to the steady performance improvements of IA-based systems, which mitigates traditional virtualization performance overheads. Intel virtualization technology provides hardware support for processor virtualization, enabling simplifications of virtual machine monitor software. Resulting VMMs can support a wider range of legacy and future operating systems while maintaining high performance.",
"title": ""
}
] |
[
{
"docid": "11a4536e40dde47e024d4fe7541b368c",
"text": "Building Information Modeling (BIM) provides an integrated 3D environment to manage large-scale engineering projects. The Architecture, Engineering and Construction (AEC) industry explores 4D visualizations over these datasets for virtual construction planning. However, existing solutions lack adequate visual mechanisms to inspect the underlying schedule and make inconsistencies readily apparent. The goal of this paper is to apply best practices of information visualization to improve 4D analysis of construction plans. We first present a review of previous work that identifies common use cases and limitations. We then consulted with AEC professionals to specify the main design requirements for such applications. These guided the development of CasCADe, a novel 4D visualization system where task sequencing and spatio-temporal simultaneity are immediately apparent. This unique framework enables the combination of diverse analytical features to create an information-rich analysis environment. We also describe how engineering collaborators used CasCADe to review the real-world construction plans of an Oil & Gas process plant. The system made evident schedule uncertainties, identified work-space conflicts and helped analyze other constructability issues. The results and contributions of this paper suggest new avenues for future research in information visualization for the AEC industry.",
"title": ""
},
{
"docid": "9dbd988e0e7510ddf4fce9d5a216f9d6",
"text": "Tooth abutments can be prepared to receive fixed dental prostheses with different types of finish lines. The literature reports different complications arising from tooth preparation techniques, including gingival recession. Vertical preparation without a finish line is a technique whereby the abutments are prepared by introducing a diamond rotary instrument into the sulcus to eliminate the cementoenamel junction and to create a new prosthetic cementoenamel junction determined by the prosthetic margin. This article describes 2 patients whose dental abutments were prepared to receive ceramic restorations using vertical preparation without a finish line.",
"title": ""
},
{
"docid": "4b6b9539468db238d92e9762b2650b61",
"text": "The previous chapters gave an insightful introduction into the various facets of Business Process Management. We now share a rich understanding of the essential ideas behind designing and managing processes for organizational purposes. We have also learned about the various streams of research and development that have influenced contemporary BPM. As a matter of fact, BPM has become a holistic management discipline. As such, it requires that a plethora of facets needs to be addressed for its successful und sustainable application. This chapter provides a framework that consolidates and structures the essential factors that constitute BPM as a whole. Drawing from research in the field of maturity models, we suggest six core elements of BPM: strategic alignment, governance, methods, information technology, people, and culture. These six elements serve as the structure for this BPM Handbook. 1 Why Looking for BPM Core Elements? A recent global study by Gartner confirmed the significance of BPM with the top issue for CIOs identified for the sixth year in a row being the improvement of business processes (Gartner 2010). While such an interest in BPM is beneficial for professionals in this field, it also increases the expectations and the pressure to deliver on the promises of the process-centered organization. This context demands a sound understanding of how to approach BPM and a framework that decomposes the complexity of a holistic approach such as Business Process Management. A framework highlighting essential building blocks of BPM can particularly serve the following purposes: M. Rosemann (*) Information Systems Discipline, Faculty of Science and Technology, Queensland University of Technology, Brisbane, Australia e-mail: [email protected] J. vom Brocke and M. Rosemann (eds.), Handbook on Business Process Management 1, International Handbooks on Information Systems, DOI 10.1007/978-3-642-00416-2_5, # Springer-Verlag Berlin Heidelberg 2010 107 l Project and Program Management: How can all relevant issues within a BPM approach be safeguarded? When implementing a BPM initiative, either as a project or as a program, is it essential to individually adjust the scope and have different BPM flavors in different areas of the organization? What competencies are relevant? What approach fits best with the culture and BPM history of the organization? What is it that needs to be taken into account “beyond modeling”? People for one thing play an important role like Hammer has pointed out in his chapter (Hammer 2010), but what might be further elements of relevance? In order to find answers to these questions, a framework articulating the core elements of BPM provides invaluable advice. l Vendor Management: How can service and product offerings in the field of BPM be evaluated in terms of their overall contribution to successful BPM? What portfolio of solutions is required to address the key issues of BPM, and to what extent do these solutions need to be sourced from outside the organization? There is, for example, a large list of providers of process-aware information systems, change experts, BPM training providers, and a variety of BPM consulting services. How can it be guaranteed that these offerings cover the required capabilities? In fact, the vast number of BPM offerings does not meet the requirements as distilled in this Handbook; see for example, Hammer (2010), Davenport (2010), Harmon (2010), and Rummler and Ramias (2010). 
It is also for the purpose of BPM make-or-buy decisions and the overall vendor management, that a framework structuring core elements of BPM is highly needed. l Complexity Management: How can the complexity that results from the holistic and comprehensive nature of BPM be decomposed so that it becomes manageable? How can a number of coexisting BPM initiatives within one organization be synchronized? An overarching picture of BPM is needed in order to provide orientation for these initiatives. Following a “divide-and-conquer” approach, a shared understanding of the core elements can help to focus on special factors of BPM. For each element, a specific analysis could be carried out involving experts from the various fields. Such an assessment should be conducted by experts with the required technical, business-oriented, and socio-cultural know-how. l Standards Management: What elements of BPM need to be standardized across the organization? What BPM elements need to be mandated for every BPM initiative? What BPM elements can be configured individually within each initiative? A comprehensive framework allows an element-by-element decision for the degrees of standardization that are required. For example, it might be decided that a company-wide process model repository will be “enforced” on all BPM initiatives, while performance management and cultural change will be decentralized activities. l Strategy Management: What is the BPM strategy of the organization? How does this strategy materialize in a BPM roadmap? How will the naturally limited attention of all involved stakeholders be distributed across the various BPM elements? How do we measure progression in a BPM initiative (“BPM audit”)? 108 M. Rosemann and J. vom Brocke",
"title": ""
},
{
"docid": "e6d359934523ed73b2f9f2ac66fd6096",
"text": "We investigate a novel and important application domain for deep RL: network routing. The question of whether/when traditional network protocol design, which relies on the application of algorithmic insights by human experts, can be replaced by a data-driven approach has received much attention recently. We explore this question in the context of the, arguably, most fundamental networking task: routing. Can ideas and techniques from machine learning be leveraged to automatically generate “good” routing configurations? We observe that the routing domain poses significant challenges for data-driven network protocol design and report on preliminary results regarding the power of data-driven routing. Our results suggest that applying deep reinforcement learning to this context yields high performance and is thus a promising direction for further research. We outline a research agenda for data-driven routing.",
"title": ""
},
{
"docid": "a2314ce56557135146e43f0d4a02782d",
"text": "This paper proposes a carrier-based pulse width modulation (CB-PWM) method with synchronous switching technique for a Vienna rectifier. In this paper, a Vienna rectifier is one of the 3-level converter topologies. It is similar to a 3-level T-type topology the used back-to-back switches. When CB-PWM switching method is used, a Vienna rectifier is operated with six PWM signals. On the other hand, when the back-to-back switches are synchronized, PWM signals can be reduced to three from six. However, the synchronous switching method has a problem that the current distortion around zero-crossing point is worse than one of the conventional CB-PWM switching method. To improve current distortions, this paper proposes a reactive current injection technique. The performance and effectiveness of the proposed synchronous switching method are verified by simulation with a 5-kW Vienna rectifier.",
"title": ""
},
{
"docid": "e50ce59ede6ad5c7a89309aed6aa06aa",
"text": "In this paper, we discuss our ongoing efforts to construct a scientific paper browsing system that helps users to read and understand advanced technical content distributed in PDF. Since PDF is a format specifically designed for printing, layout and logical structures of documents are indistinguishably embedded in the file. It requires much effort to extract natural language text from PDF files, and reversely, display semantic annotations produced by NLP tools on the original page layout. In our browsing system, we tackle these issues caused by the gap between printable document and plain text. Our system provides ways to extract natural language sentences from PDF files together with their logical structures, and also to map arbitrary textual spans to their corresponding regions on page images. We setup a demonstration system using papers published in ACL anthology and demonstrate the enhanced search and refined recommendation functions which we plan to make widely available to NLP researchers.",
"title": ""
},
{
"docid": "53a962f9ab6dacf876e129ce3039fd69",
"text": "9 Background. Office of Academic Affairs (OAA), Office of Student Life (OSL) and Information Technology Helpdesk (ITD) are support functions within a university which receives hundreds of email messages on the daily basis. A large percentage of emails received by these departments are frequent and commonly used queries or request for information. Responding to every query by manually typing is a tedious and time consuming task and an automated approach for email response suggestion can save lot of time. 10",
"title": ""
},
{
"docid": "51f437b8631ba8494d3cf3bad7578794",
"text": "The notion of a <italic>program slice</italic>, originally introduced by Mark Weiser, is useful in program debugging, automatic parallelization, and program integration. A slice of a program is taken with respect to a program point <italic>p</italic> and a variable <italic>x</italic>; the slice consists of all statements of the program that might affect the value of <italic>x</italic> at point <italic>p</italic>. This paper concerns the problem of interprocedural slicing—generating a slice of an entire program, where the slice crosses the boundaries of procedure calls. To solve this problem, we introduce a new kind of graph to represent programs, called a <italic>system dependence graph</italic>, which extends previous dependence representations to incorporate collections of procedures (with procedure calls) rather than just monolithic programs. Our main result is an algorithm for interprocedural slicing that uses the new representation. (It should be noted that our work concerns a somewhat restricted kind of slice: rather than permitting a program to be sliced with respect to program point <italic>p</italic> and an <italic>arbitrary</italic> variable, a slice must be taken with respect to a variable that is <italic>defined</italic> or <italic>used</italic> at <italic>p</italic>.)\nThe chief difficulty in interprocedural slicing is correctly accounting for the calling context of a called procedure. To handle this problem, system dependence graphs include some data dependence edges that represent <italic>transitive</italic> dependences due to the effects of procedure calls, in addition to the conventional direct-dependence edges. These edges are constructed with the aid of an auxiliary structure that represents calling and parameter-linkage relationships. This structure takes the form of an attribute grammar. The step of computing the required transitive-dependence edges is reduced to the construction of the subordinate characteristic graphs for the grammar's nonterminals.",
"title": ""
},
{
"docid": "efa21b9d1cd973fc068074b94b60bf08",
"text": "BACKGROUND\nDermoscopy is useful in evaluating skin tumours, but its applicability also extends into the field of inflammatory skin disorders. Discoid lupus erythematosus (DLE) represents the most common subtype of cutaneous lupus erythematosus. While dermoscopy and videodermoscopy have been shown to aid the differentiation of scalp DLE from other causes of scarring alopecia, limited data exist concerning dermoscopic criteria of DLE in other locations, such as the face, trunk and extremities.\n\n\nOBJECTIVE\nTo describe the dermoscopic criteria observed in a series of patients with DLE located on areas other than the scalp, and to correlate them to the underlying histopathological alterations.\n\n\nMETHODS\nDLE lesions located on the face, trunk and extremities were dermoscopically and histopathologically examined. Selection of the dermoscopic variables included in the evaluation process was based on data in the available literature on DLE of the scalp and on our preliminary observations. Analysis of data was done with SPSS analysis software.\n\n\nRESULTS\nFifty-five lesions from 37 patients with DLE were included in the study. Perifollicular whitish halo, follicular keratotic plugs and telangiectasias were the most common dermoscopic criteria. Statistical analysis revealed excellent correlation between dermoscopic and histopathological findings. Notably, a time-related alteration of dermoscopic features was observed.\n\n\nCONCLUSIONS\nThe present study provides new insights into the dermoscopic variability of DLE located on the face, trunk and extremities.",
"title": ""
},
{
"docid": "8e2006ca72dbc6be6592e21418b7f3ba",
"text": "In this paper, we survey the techniques for image-based rendering. Unlike traditional 3D computer graphics in which 3D geometry of the scene is known, image-based rendering techniques render novel views directly from input images. Previous image-based rendering techniques can be classified into three categories according to how much geometric information is used: rendering without geometry, rendering with implicit geometry (i.e., correspondence), and rendering with explicit geometry (either with approximate or accurate geometry). We discuss the characteristics of these categories and their representative methods. The continuum between images and geometry used in image-based rendering techniques suggests that image-based rendering with traditional 3D graphics can be united in a joint image and geometry space.",
"title": ""
},
{
"docid": "6e527b021720cc006ec18a996abf36b5",
"text": "Flow cytometry is a sophisticated instrument measuring multiple physical characteristics of a single cell such as size and granularity simultaneously as the cell flows in suspension through a measuring device. Its working depends on the light scattering features of the cells under investigation, which may be derived from dyes or monoclonal antibodies targeting either extracellular molecules located on the surface or intracellular molecules inside the cell. This approach makes flow cytometry a powerful tool for detailed analysis of complex populations in a short period of time. This review covers the general principles and selected applications of flow cytometry such as immunophenotyping of peripheral blood cells, analysis of apoptosis and detection of cytokines. Additionally, this report provides a basic understanding of flow cytometry technology essential for all users as well as the methods used to analyze and interpret the data. Moreover, recent progresses in flow cytometry have been discussed in order to give an opinion about the future importance of this technology.",
"title": ""
},
{
"docid": "109a1276cd743a522b9e0a36b9b58f32",
"text": "This study examined the effects of a virtual reality distraction intervention on chemotherapy-related symptom distress levels in 16 women aged 50 and older. A cross-over design was used to answer the following research questions: (1) Is virtual reality an effective distraction intervention for reducing chemotherapy-related symptom distress levels in older women with breast cancer? (2) Does virtual reality have a lasting effect? Chemotherapy treatments are intensive and difficult to endure. One way to cope with chemotherapy-related symptom distress is through the use of distraction. For this study, a head-mounted display (Sony PC Glasstron PLM - S700) was used to display encompassing images and block competing stimuli during chemotherapy infusions. The Symptom Distress Scale (SDS), Revised Piper Fatigue Scale (PFS), and the State Anxiety Inventory (SAI) were used to measure symptom distress. For two matched chemotherapy treatments, one pre-test and two post-test measures were employed. Participants were randomly assigned to receive the VR distraction intervention during one chemotherapy treatment and received no distraction intervention (control condition) during an alternate chemotherapy treatment. Analysis using paired t-tests demonstrated a significant decrease in the SAI (p = 0.10) scores immediately following chemotherapy treatments when participants used VR. No significant changes were found in SDS or PFS values. There was a consistent trend toward improved symptoms on all measures 48 h following completion of chemotherapy. Evaluation of the intervention indicated that women thought the head mounted device was easy to use, they experienced no cybersickness, and 100% would use VR again.",
"title": ""
},
{
"docid": "b15ed1584eb030fba1ab3c882983dbf0",
"text": "The need for automated grading tools for essay writing and open-ended assignments has received increasing attention due to the unprecedented scale of Massive Online Courses (MOOCs) and the fact that more and more students are relying on computers to complete and submit their school work. In this paper, we propose an efficient memory networks-powered automated grading model. The idea of our model stems from the philosophy that with enough graded samples for each score in the rubric, such samples can be used to grade future work that is found to be similar. For each possible score in the rubric, a student response graded with the same score is collected. These selected responses represent the grading criteria specified in the rubric and are stored in the memory component. Our model learns to predict a score for an ungraded response by computing the relevance between the ungraded response and each selected response in memory. The evaluation was conducted on the Kaggle Automated Student Assessment Prize (ASAP) dataset. The results show that our model achieves state-of-the-art performance in 7 out of 8 essay sets.",
"title": ""
},
{
"docid": "7cb07d4ed42409269337c6ea1d6aa5c6",
"text": "Since a parallel structure is a closed kinematics chain, all legs are connected from the origin of the tool point by a parallel connection. This connection allows a higher precision and a higher velocity. Parallel kinematic manipulators have better performance compared to serial kinematic manipulators in terms of a high degree of accuracy, high speeds or accelerations and high stiffness. Therefore, they seem perfectly suitable for industrial high-speed applications, such as pick-and-place or micro and high-speed machining. They are used in many fields such as flight simulation systems, manufacturing and medical applications. One of the most popular parallel manipulators is the general purpose 6 degree of freedom (DOF) Stewart Platform (SP) proposed by Stewart in 1965 as a flight simulator (Stewart, 1965). It consists of a top plate (moving platform), a base plate (fixed base), and six extensible legs connecting the top plate to the bottom plate. SP employing the same architecture of the Gough mechanism (Merlet, 1999) is the most studied type of parallel manipulators. This is also known as Gough–Stewart platforms in literature. Complex kinematics and dynamics often lead to model simplifications decreasing the accuracy. In order to overcome this problem, accurate kinematic and dynamic identification is needed. The kinematic and dynamic modeling of SP is extremely complicated in comparison with serial robots. Typically, the robot kinematics can be divided into forward kinematics and inverse kinematics. For a parallel manipulator, inverse kinematics is straight forward and there is no complexity deriving the equations. However, forward kinematics of SP is very complicated and difficult to solve since it requires the solution of many non-linear equations. Moreover, the forward kinematic problem generally has more than one solution. As a result, most research papers concentrated on the forward kinematics of the parallel manipulators (Bonev and Ryu, 2000; Merlet, 2004; Harib and Srinivasan, 2003; Wang, 2007). For the design and the control of the SP manipulators, the accurate dynamic model is very essential. The dynamic modeling of parallel manipulators is quite complicated because of their closed-loop structure, coupled relationship between system parameters, high nonlinearity in system dynamics and kinematic constraints. Robot dynamic modeling can be also divided into two topics: inverse and forward dynamic model. The inverse dynamic model is important for system control while the forward model is used for system simulation. To obtain the dynamic model of parallel manipulators, there are many valuable studies published by many researches in the literature. The dynamic analysis of parallel manipulators has been traditionally performed through several different methods such as",
"title": ""
},
{
"docid": "1f9bbe5628ad4bb37d2b6fa3b53bcc12",
"text": "The problem of consistently estimating the sparsity pattern of a vector beta* isin Rp based on observations contaminated by noise arises in various contexts, including signal denoising, sparse approximation, compressed sensing, and model selection. We analyze the behavior of l1-constrained quadratic programming (QP), also referred to as the Lasso, for recovering the sparsity pattern. Our main result is to establish precise conditions on the problem dimension p, the number k of nonzero elements in beta*, and the number of observations n that are necessary and sufficient for sparsity pattern recovery using the Lasso. We first analyze the case of observations made using deterministic design matrices and sub-Gaussian additive noise, and provide sufficient conditions for support recovery and linfin-error bounds, as well as results showing the necessity of incoherence and bounds on the minimum value. We then turn to the case of random designs, in which each row of the design is drawn from a N (0, Sigma) ensemble. For a broad class of Gaussian ensembles satisfying mutual incoherence conditions, we compute explicit values of thresholds 0 < thetasl(Sigma) les thetasu(Sigma) < +infin with the following properties: for any delta > 0, if n > 2 (thetasu + delta) klog (p- k), then the Lasso succeeds in recovering the sparsity pattern with probability converging to one for large problems, whereas for n < 2 (thetasl - delta)klog (p - k), then the probability of successful recovery converges to zero. For the special case of the uniform Gaussian ensemble (Sigma = Iptimesp), we show that thetasl = thetas<u = 1, so that the precise threshold n = 2 klog(p- k) is exactly determined.",
"title": ""
},
{
"docid": "42ecca95c15cd1f92d6e5795f99b414a",
"text": "Personalized tag recommendation systems recommend a list of tags to a user when he is about to annotate an item. It exploits the individual preference and the characteristic of the items. Tensor factorization techniques have been applied to many applications, such as tag recommendation. Models based on Tucker Decomposition can achieve good performance but require a lot of computation power. On the other hand, models based on Canonical Decomposition can run in linear time and are more feasible for online recommendation. In this paper, we propose a novel method for personalized tag recommendation, which can be considered as a nonlinear extension of Canonical Decomposition. Different from linear tensor factorization, we exploit Gaussian radial basis function to increase the model’s capacity. The experimental results show that our proposed method outperforms the state-of-the-art methods for tag recommendation on real datasets and perform well even with a small number of features, which verifies that our models can make better use of features.",
"title": ""
},
{
"docid": "25a13221c6cda8e1d4adf451701e421a",
"text": "Block-based local binary patterns a.k.a. enhanced local binary patterns (ELBPs) have proven to be a highly discriminative descriptor for face recognition and image retrieval. Since this descriptor is mainly composed by histograms, little work (if any) has been done for selecting its relevant features (either the bins or the blocks). In this paper, we address feature selection for both the classic ELBP representation and the recently proposed color quaternionic LBP (QLBP). We introduce a filter method for the automatic weighting of attributes or blocks using an improved version of the margin-based iterative search Simba algorithm. This new improved version introduces two main modifications: (i) the hypothesis margin of a given instance is computed by taking into account the K-nearest neighboring examples within the same class as well as the K-nearest neighboring examples with a different label; (ii) the distances between samples and their nearest neighbors are computed using the weighted $$\\chi ^2$$ χ 2 distance instead of the Euclidean one. This algorithm has been compared favorably with several competing feature selection algorithms including the Euclidean-based Simba as well as variance and Fisher score algorithms giving higher performances. The proposed method is useful for other descriptors that are formed by histograms. Experimental results show that the QLBP descriptor allows an improvement of the accuracy in discriminating faces compared with the ELBP. They also show that the obtained selection (attributes or blocks) can either improve recognition performance or maintain it with a significant reduction in the descriptor size.",
"title": ""
},
{
"docid": "1d7b7ea9f0cc284f447c11902bad6685",
"text": "In the last few years the efficiency of secure multi-party computation (MPC) increased in several orders of magnitudes. However, this alone might not be enough if we want MPC protocols to be used in practice. A crucial property that is needed in many applications is that everyone can check that a given (secure) computation was performed correctly – even in the extreme case where all the parties involved in the computation are corrupted, and even if the party who wants to verify the result was not participating. This is especially relevant in the clients-servers setting, where many clients provide input to a secure computation performed by a few servers. An obvious example of this is electronic voting, but also in many types of auctions one may want independent verification of the result. Traditionally, this is achieved by using non-interactive zero-knowledge proofs during the computation. A recent trend in MPC protocols is to have a more expensive preprocessing phase followed by a very efficient online phase, e.g., the recent so-called SPDZ protocol by Damg̊ard et al. Applications such as voting and some auctions are perfect use-case for these protocols, as the parties usually know well in advance when the computation will take place, and using those protocols allows us to use only cheap information-theoretic primitives in the actual computation. Unfortunately no protocol of the SPDZ type supports an audit phase. In this paper, we show how to achieve efficient MPC with a public audit. We formalize the concept of publicly auditable secure computation and provide an enhanced version of the SPDZ protocol where, even if all the servers are corrupted, anyone with access to the transcript of the protocol can check that the output is indeed correct. Most importantly, we do so without significantly compromising the performance of SPDZ i.e. our online phase has complexity approximately twice that of SPDZ.",
"title": ""
},
{
"docid": "1e44c9ab1f66f7bf7dced95bd3f0a122",
"text": "The expression and experience of human behavior are complex and multimodal and characterized by individual and contextual heterogeneity and variability. Speech and spoken language communication cues offer an important means for measuring and modeling human behavior. Observational research and practice across a variety of domains from commerce to healthcare rely on speech- and language-based informatics for crucial assessment and diagnostic information and for planning and tracking response to an intervention. In this paper, we describe some of the opportunities as well as emerging methodologies and applications of human behavioral signal processing (BSP) technology and algorithms for quantitatively understanding and modeling typical, atypical, and distressed human behavior with a specific focus on speech- and language-based communicative, affective, and social behavior. We describe the three important BSP components of acquiring behavioral data in an ecologically valid manner across laboratory to real-world settings, extracting and analyzing behavioral cues from measured data, and developing models offering predictive and decision-making support. We highlight both the foundational speech and language processing building blocks as well as the novel processing and modeling opportunities. Using examples drawn from specific real-world applications ranging from literacy assessment and autism diagnostics to psychotherapy for addiction and marital well being, we illustrate behavioral informatics applications of these signal processing techniques that contribute to quantifying higher level, often subjectively described, human behavior in a domain-sensitive fashion.",
"title": ""
},
{
"docid": "f50d0948319a4487b43b94bac09e5fab",
"text": "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.",
"title": ""
}
] |
scidocsrr
|
5bb3b78d1b976ec8a8b48f408a80c8a2
|
CamBP: a camera-based, non-contact blood pressure monitor
|
[
{
"docid": "2531d8d05d262c544a25dbffb7b43d67",
"text": "Plethysmographic signals were measured remotely (> 1m) using ambient light and a simple consumer level digital camera in movie mode. Heart and respiration rates could be quantified up to several harmonics. Although the green channel featuring the strongest plethysmographic signal, corresponding to an absorption peak by (oxy-) hemoglobin, the red and blue channels also contained plethysmographic information. The results show that ambient light photo-plethysmography may be useful for medical purposes such as characterization of vascular skin lesions (e.g., port wine stains) and remote sensing of vital signs (e.g., heart and respiration rates) for triage or sports purposes.",
"title": ""
}
] |
[
{
"docid": "28a6a89717b3894b181e746e684cfad5",
"text": "When information is abundant, it becomes increasingly difficult to fit nuggets of knowledge into a single coherent picture. Complex stories spaghetti into branches, side stories, and intertwining narratives. In order to explore these stories, one needs a map to navigate unfamiliar territory. We propose a methodology for creating structured summaries of information, which we call metro maps. Our proposed algorithm generates a concise structured set of documents maximizing coverage of salient pieces of information. Most importantly, metro maps explicitly show the relations among retrieved pieces in a way that captures story development. We first formalize characteristics of good maps and formulate their construction as an optimization problem. Then we provide efficient methods with theoretical guarantees for generating maps. Finally, we integrate user interaction into our framework, allowing users to alter the maps to better reflect their interests. Pilot user studies with a real-world dataset demonstrate that the method is able to produce maps which help users acquire knowledge efficiently.",
"title": ""
},
{
"docid": "7b3dd8bdc75bf99f358ef58b2d56e570",
"text": "This paper studies asset allocation decisions in the presence of regime switching in asset returns. We find evidence that four separate regimes characterized as crash, slow growth, bull and recovery states are required to capture the joint distribution of stock and bond returns. Optimal asset allocations vary considerably across these states and change over time as investors revise their estimates of the state probabilities. In the crash state, buy-and-hold investors allocate more of their portfolio to stocks the longer their investment horizon, while the optimal allocation to stocks declines as a function of the investment horizon in bull markets. The joint effects of learning about state probabilities and predictability of asset returns from the dividend yield give rise to a non-monotonic relationship between the investment horizon and the demand for stocks. Welfare costs from ignoring regime switching can be substantial even after accounting for parameter uncertainty. Out-of-sample forecasting experiments confirm the economic importance of accounting for the presence of regimes in asset returns.",
"title": ""
},
{
"docid": "c31dbdee3c36690794f3537c61cfc1e3",
"text": "Shape memory alloy (SMA) actuators, which have ability to return to a predetermined shape when heated, have many potential applications in aeronautics, surgical tools, robotics and so on. Although the number of applications is increasing, there has been limited success in precise motion control since the systems are disturbed by unknown factors beside their inherent nonlinear hysteresis or the surrounding environment of the systems is changed. This paper presents a new development of SMA position control system by using self-tuning fuzzy PID controller. The use of this control algorithm is to tune the parameters of the PID controller by integrating fuzzy inference and producing a fuzzy adaptive PID controller that can be used to improve the control performance of nonlinear systems. The experimental results of position control of SMA actuators using conventional and self tuning fuzzy PID controller are both included in this paper",
"title": ""
},
{
"docid": "9497731525a996844714d5bdbca6ae03",
"text": "Recently, machine learning is widely used in applications and cloud services. And as the emerging field of machine learning, deep learning shows excellent ability in solving complex learning problems. To give users better experience, high performance implementations of deep learning applications seem very important. As a common means to accelerate algorithms, FPGA has high performance, low power consumption, small size and other characteristics. So we use FPGA to design a deep learning accelerator, the accelerator focuses on the implementation of the prediction process, data access optimization and pipeline structure. Compared with Core 2 CPU 2.3GHz, our accelerator can achieve promising result.",
"title": ""
},
{
"docid": "ea8450e8e1a217f1af596bb70051f5e7",
"text": "Supplier selection is nowadays one of the critical topics in supply chain management. This paper presents a new decision making approach for group multi-criteria supplier selection problem, which clubs supplier selection process with order allocation for dynamic supply chains to cope market variations. More specifically, the developed approach imitates the knowledge acquisition and manipulation in a manner similar to the decision makers who have gathered considerable knowledge and expertise in procurement domain. Nevertheless, under many conditions, exact data are inadequate to model real-life situation and fuzzy logic can be incorporated to handle the vagueness of the decision makers. As per this concept, fuzzy-AHP method is used first for supplier selection through four classes (CLASS I: Performance strategy, CLASS II: Quality of service, CLASS III: Innovation and CLASS IV: Risk), which are qualitatively meaningful. Thereafter, using simulation based fuzzy TOPSIS technique, the criteria application is quantitatively evaluated for order allocation among the selected suppliers. As a result, the approach generates decision-making knowledge, and thereafter, the developed combination of rules order allocation can easily be interpreted, adopted and at the same time if necessary, modified by decision makers. To demonstrate the applicability of the proposed approach, an illustrative example is presented and the results analyzed. & 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2ff17287164ea85e0a41974e5da0ecb6",
"text": "We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification. The module takes as input the 2D feature vector maps which form the intermediate representations of the input image at different stages in the CNN pipeline, and outputs a 2D matrix of scores for each map. Standard CNN architectures are modified through the incorporation of this module, and trained under the constraint that a convex combination of the intermediate 2D feature vectors, as parameterised by the score matrices, must alone be used for classification. Incentivised to amplify the relevant and suppress the irrelevant or misleading, the scores thus assume the role of attention values. Our experimental observations provide clear evidence to this effect: the learned attention maps neatly highlight the regions of interest while suppressing background clutter. Consequently, the proposed function is able to bootstrap standard CNN architectures for the task of image classification, demonstrating superior generalisation over 6 unseen benchmark datasets. When binarised, our attention maps outperform other CNN-based attention maps, traditional saliency maps, and top object proposals for weakly supervised segmentation as demonstrated on the Object Discovery dataset. We also demonstrate improved robustness against the fast gradient sign method of adversarial attack.",
"title": ""
},
{
"docid": "c673f0f39f874f8dfde363d6a030e8dd",
"text": "Big data refers to data volumes in the range of exabytes (1018) and beyond. Such volumes exceed the capacity of current on-line storage systems and processing systems. Data, information, and knowledge are being created and collected at a rate that is rapidly approaching the exabyte/year range. But, its creation and aggregation are accelerating and will approach the zettabyte/year range within a few years. Volume is only one aspect of big data; other attributes are variety, velocity, value, and complexity. Storage and data transport are technology issues, which seem to be solvable in the near-term, but represent longterm challenges that require research and new paradigms. We analyze the issues and challenges as we begin a collaborative research program into methodologies for big data analysis and design.",
"title": ""
},
{
"docid": "593aae604e5ecd7b6d096ed033a303f8",
"text": "We describe the first mobile app for identifying plant species using automatic visual recognition. The system – called Leafsnap – identifies tree species from photographs of their leaves. Key to this system are computer vision components for discarding non-leaf images, segmenting the leaf from an untextured background, extracting features representing the curvature of the leaf’s contour over multiple scales, and identifying the species from a dataset of the 184 trees in the Northeastern United States. Our system obtains state-of-the-art performance on the real-world images from the new Leafsnap Dataset – the largest of its kind. Throughout the paper, we document many of the practical steps needed to produce a computer vision system such as ours, which currently has nearly a million users.",
"title": ""
},
{
"docid": "63405a3fc4815e869fc872bb96bb8a33",
"text": "We demonstrate how to learn efficient heuristics for automated reasoning algorithms through deep reinforcement learning. We consider search algorithms for quantified Boolean logics, that already can solve formulas of impressive size up to 100s of thousands of variables. The main challenge is to find a representation which lends to making predictions in a scalable way. The heuristics learned through our approach significantly improve over the handwritten heuristics for several sets of formulas.",
"title": ""
},
{
"docid": "3d10793b2e4e63e7d639ff1e4cdf04b6",
"text": "Research in signal processing shows that a variety of transforms have been introduced to map the data from the original space into the feature space, in order to efficiently analyze a signal. These techniques differ in their basis functions, that is used for projecting the signal into a higher dimensional space. One of the widely used schemes for quasi-stationary and non-stationary signals is the time-frequency (TF) transforms, characterized by specific kernel functions. This work introduces a novel class of Ramanujan Fourier Transform (RFT) based TF transform functions, constituted by Ramanujan sums (RS) basis. The proposed special class of transforms offer high immunity to noise interference, since the computation is carried out only on co-resonant components, during analysis of signals. Further, we also provide a 2-D formulation of the RFT function. Experimental validation using synthetic examples, indicates that this technique shows potential for obtaining relatively sparse TF-equivalent representation and can be optimized for characterization of certain real-life signals.",
"title": ""
},
{
"docid": "e1b6de27518c1c17965a891a8d14a1e1",
"text": "Mobile phones are becoming more and more widely used nowadays, and people do not use the phone only for communication: there is a wide variety of phone applications allowing users to select those that fit their needs. Aggregated over time, application usage patterns exhibit not only what people are consistently interested in but also the way in which they use their phones, and can help improving phone design and personalized services. This work aims at mining automatically usage patterns from apps data recorded continuously with smartphones. A new probabilistic framework for mining usage patterns is proposed. Our methodology involves the design of a bag-of-apps model that robustly represents level of phone usage over specific times of the day, and the use of a probabilistic topic model that jointly discovers patterns of usage over multiple applications and describes users as mixtures of such patterns. Our framework is evaluated using 230 000+ hours of real-life app phone log data, demonstrates that relevant patterns of usage can be extracted, and is objectively validated on a user retrieval task with competitive performance.",
"title": ""
},
{
"docid": "1865cf66083c30d74b555eab827d0f5f",
"text": "Business intelligence and analytics (BIA) is about the development of technologies, systems, practices, and applications to analyze critical business data so as to gain new insights about business and markets. The new insights can be used for improving products and services, achieving better operational efficiency, and fostering customer relationships. In this article, we will categorize BIA research activities into three broad research directions: (a) big data analytics, (b) text analytics, and (c) network analytics. The article aims to review the state-of-the-art techniques and models and to summarize their use in BIA applications. For each research direction, we will also determine a few important questions to be addressed in future research.",
"title": ""
},
{
"docid": "5a3f542176503ddc6fcbd0fe29f08869",
"text": "INTRODUCTION\nArtificial intelligence is a branch of computer science capable of analysing complex medical data. Their potential to exploit meaningful relationship with in a data set can be used in the diagnosis, treatment and predicting outcome in many clinical scenarios.\n\n\nMETHODS\nMedline and internet searches were carried out using the keywords 'artificial intelligence' and 'neural networks (computer)'. Further references were obtained by cross-referencing from key articles. An overview of different artificial intelligent techniques is presented in this paper along with the review of important clinical applications.\n\n\nRESULTS\nThe proficiency of artificial intelligent techniques has been explored in almost every field of medicine. Artificial neural network was the most commonly used analytical tool whilst other artificial intelligent techniques such as fuzzy expert systems, evolutionary computation and hybrid intelligent systems have all been used in different clinical settings.\n\n\nDISCUSSION\nArtificial intelligence techniques have the potential to be applied in almost every field of medicine. There is need for further clinical trials which are appropriately designed before these emergent techniques find application in the real clinical setting.",
"title": ""
},
{
"docid": "fd5c5ff7c97b9d6b6bfabca14631b423",
"text": "The composition and activity of the gut microbiota codevelop with the host from birth and is subject to a complex interplay that depends on the host genome, nutrition, and life-style. The gut microbiota is involved in the regulation of multiple host metabolic pathways, giving rise to interactive host-microbiota metabolic, signaling, and immune-inflammatory axes that physiologically connect the gut, liver, muscle, and brain. A deeper understanding of these axes is a prerequisite for optimizing therapeutic strategies to manipulate the gut microbiota to combat disease and improve health.",
"title": ""
},
{
"docid": "30d0453033d3951f5b5faf3213eacb89",
"text": "Semantic mapping is the incremental process of “mapping” relevant information of the world (i.e., spatial information, temporal events, agents and actions) to a formal description supported by a reasoning engine. Current research focuses on learning the semantic of environments based on their spatial location, geometry and appearance. Many methods to tackle this problem have been proposed, but the lack of a uniform representation, as well as standard benchmarking suites, prevents their direct comparison. In this paper, we propose a standardization in the representation of semantic maps, by defining an easily extensible formalism to be used on top of metric maps of the environments. Based on this, we describe the procedure to build a dataset (based on real sensor data) for benchmarking semantic mapping techniques, also hypothesizing some possible evaluation metrics. Nevertheless, by providing a tool for the construction of a semantic map ground truth, we aim at the contribution of the scientific community in acquiring data for populating the dataset.",
"title": ""
},
{
"docid": "56e1778df9d5b6fa36cbf4caae710e67",
"text": "The Levenberg-Marquardt method is a standard technique used to solve nonlinear least squares problems. Least squares problems arise when fitting a parameterized function to a set of measured data points by minimizing the sum of the squares of the errors between the data points and the function. Nonlinear least squares problems arise when the function is not linear in the parameters. Nonlinear least squares methods involve an iterative improvement to parameter values in order to reduce the sum of the squares of the errors between the function and the measured data points. The Levenberg-Marquardt curve-fitting method is actually a combination of two minimization methods: the gradient descent method and the Gauss-Newton method. In the gradient descent method, the sum of the squared errors is reduced by updating the parameters in the direction of the greatest reduction of the least squares objective. In the Gauss-Newton method, the sum of the squared errors is reduced by assuming the least squares function is locally quadratic, and finding the minimum of the quadratic. The Levenberg-Marquardt method acts more like a gradient-descent method when the parameters are far from their optimal value, and acts more like the Gauss-Newton method when the parameters are close to their optimal value. This document describes these methods and illustrates the use of software to solve nonlinear least squares curve-fitting problems.",
"title": ""
},
{
"docid": "2431ee8fb0dcfd84c61e60ee41a95edb",
"text": "Web applications have become a very popular means of developing software. This is because of many advantages of web applications like no need of installation on each client machine, centralized data, reduction in business cost etc. With the increase in this trend web applications are becoming vulnerable for attacks. Cross site scripting (XSS) is the major threat for web application as it is the most basic attack on web application. It provides the surface for other types of attacks like Cross Site Request Forgery, Session Hijacking etc. There are three types of XSS attacks i.e. non-persistent (or reflected) XSS, persistent (or stored) XSS and DOM-based vulnerabilities. There is one more type that is not as common as those three types, induced XSS. In this work we aim to study and consolidate the understanding of XSS and their origin, manifestation, kinds of dangers and mitigation efforts for XSS. Different approaches proposed by researchers are presented here and an analysis of these approaches is performed. Finally the conclusion is drawn at the end of the work.",
"title": ""
},
{
"docid": "6c1c3bc94314ce1efae62ac3ec605d4a",
"text": "Solar energy is an abundant renewable energy source (RES) which is available without any price from the Sun to the earth. It can be a good alternative of energy source in place of non-renewable sources (NRES) of energy like as fossil fuels and petroleum articles. Sun light can be utilized through solar cells which fulfills the need of energy of the utilizer instead of energy generation by NRES. The development of solar cells has crossed by a number of modifications from one age to another. The cost and efficiency of solar cells are the obstacles in the advancement. In order to select suitable solar photovoltaic (PV) cells for a particular area, operators are needed to sense the basic mechanisms and topologies of diverse solar PV with maximum power point tracking (MPPT) methodologies that are checked to a great degree. In this article, authors reviewed and analyzed a successive growth in the solar PV cell research from one decade to other, and explained about their coming fashions and behaviors. This article also attempts to emphasize on many experiments and technologies to contribute the perks of solar energy.",
"title": ""
},
{
"docid": "0dc1119bf47ffa6d032c464a54d5d173",
"text": "The use of an analogy from a semantically distant domain to guide the problemsolving process was investigated. The representation of analogy in memory and processes involved in the use of analogies were discussed theoretically and explored in five experiments. In Experiment I oral protocols were used to examine the processes involved in solving a problem by analogy. In all experiments subjects who first read a story about a military problem and its solution tended to generate analogous solutions to a medical problem (Duncker’s “radiation problem”), provided they were given a hint to use the story to help solve the problem. Transfer frequency was reduced when the problem presented in the military story was substantially disanalogous to the radiation problem, even though the solution illustrated in the story corresponded to an effective radiation solution (Experiment II). Subjects in Experiment III tended to generate analogous solutions to the radiation problem after providing their own solutions to the military problem. Subjects were able to retrieve the story from memory and use it to generate an analogous solution, even when the critical story had been memorized in the context of two distractor stories (Experiment IV). However, when no hint to consider the story was given, frequency of analogous solutions decreased markedly. This decrease in transfer occurred when the story analogy was presented in a recall task along with distractor stories (Experiment IV), when it was presented alone, and when it was presented in between two attempts to solve the problem (Experiment V). Component processes and strategic variations in analogical problem solving were discussed. Issues related to noticing analogies and accessing them in memory were also examined, as was the relationship of analogical reasoning to other cognitive tasks.",
"title": ""
},
{
"docid": "419499ced8902a00909c32db352ea7f5",
"text": "Software defined networks provide new opportunities for automating the process of network debugging. Many tools have been developed to verify the correctness of network configurations on the control plane. However, due to software bugs and hardware faults of switches, the correctness of control plane may not readily translate into that of data plane. To bridge this gap, we present VeriDP, which can monitor \"whether actual forwarding behaviors are complying with network configurations\". Given that policies are well-configured, operators can leverage VeriDP to monitor the correctness of the network data plane. In a nutshell, VeriDP lets switches tag packets that they forward, and report tags together with headers to the verification server before the packets leave the network. The verification server pre-computes all header-to-tag mappings based on the configuration, and checks whether the reported tags agree with the mappings. We prototype VeriDP with both software and hardware OpenFlow switches, and use emulation to show that VeriDP can detect common data plane fault including black holes and access violations, with a minimal impact on the data plane.",
"title": ""
}
] |
scidocsrr
|
014ee38649c8ac97b152962fc03d8a2d
|
A review of stator fault monitoring techniques of induction motors
|
[
{
"docid": "eda3987f781263615ccf53dd9a7d1a27",
"text": "The study gives a synopsis over condition monitoring methods both as a diagnostic tool and as a technique for failure identification in high voltage induction motors in industry. New running experience data for 483 motor units with 6135 unit years are registered and processed statistically, to reveal the connection between motor data, protection and condition monitoring methods, maintenance philosophy and different types of failures. The different types of failures are further analyzed to failure-initiators, -contributors and -underlying causes. The results have been compared with those of a previous survey, IEEE Report of Large Motor Reliability Survey of Industrial and Commercial Installations, 1985. In the present survey the motors are in the range of 100 to 1300 kW, 47% of them between 100 and 500 kW.",
"title": ""
}
] |
[
{
"docid": "f26f254827efa3fe29301ef31eb8669f",
"text": "Today, computer vision systems are tested by their accuracy in detecting and localizing instances of objects. As an alternative, and motivated by the ability of humans to provide far richer descriptions and even tell a story about an image, we construct a \"visual Turing test\": an operator-assisted device that produces a stochastic sequence of binary questions from a given test image. The query engine proposes a question; the operator either provides the correct answer or rejects the question as ambiguous; the engine proposes the next question (\"just-in-time truthing\"). The test is then administered to the computer-vision system, one question at a time. After the system's answer is recorded, the system is provided the correct answer and the next question. Parsing is trivial and deterministic; the system being tested requires no natural language processing. The query engine employs statistical constraints, learned from a training set, to produce questions with essentially unpredictable answers-the answer to a question, given the history of questions and their correct answers, is nearly equally likely to be positive or negative. In this sense, the test is only about vision. The system is designed to produce streams of questions that follow natural story lines, from the instantiation of a unique object, through an exploration of its properties, and on to its relationships with other uniquely instantiated objects.",
"title": ""
},
{
"docid": "bfd23678afff2ac4cd4650cf46195590",
"text": "The Islamic State of Iraq and ash-Sham (ISIS) continues to use social media as an essential element of its campaign to motivate support. On Twitter, ISIS' unique ability to leverage unaffiliated sympathizers that simply retweet propaganda has been identified as a primary mechanism in their success in motivating both recruitment and \"lone wolf\" attacks. The present work explores a large community of Twitter users whose activity supports ISIS propaganda diffusion in varying degrees. Within this ISIS supporting community, we observe a diverse range of actor types, including fighters, propagandists, recruiters, religious scholars, and unaffiliated sympathizers. The interaction between these users offers unique insight into the people and narratives critical to ISIS' sustainment. In their entirety, we refer to this diverse set of users as an online extremist community or OEC. We present Iterative Vertex Clustering and Classification (IVCC), a scalable analytic approach for OEC detection in annotated heterogeneous networks, and provide an illustrative case study of an online community of over 22,000 Twitter users whose online behavior directly advocates support for ISIS or contibutes to the group's propaganda dissemination through retweets.",
"title": ""
},
{
"docid": "0173524e63580e3a4d85cb41719e0bd6",
"text": "This paper describes some of the possibilities of artificial neural networks that open up after solving the problem of catastrophic forgetting. A simple model and reinforcement learning applications of existing methods are also proposed",
"title": ""
},
{
"docid": "d3c811ec795c04005fb04cdf6eec6b0e",
"text": "We present a new replay-based method of continual classification learning that we term \"conditional replay\" which generates samples and labels together by sampling from a distribution conditioned on the class. We compare conditional replay to another replay-based continual learning paradigm (which we term \"marginal replay\") that generates samples independently of their class and assigns labels in a separate step. The main improvement in conditional replay is that labels for generated samples need not be inferred, which reduces the margin for error in complex continual classification learning tasks. We demonstrate the effectiveness of this approach using novel and standard benchmarks constructed from MNIST and Fashion MNIST data, and compare to the regularization-based EWC method (Kirkpatrick et al., 2016; Shin et al., 2017).",
"title": ""
},
{
"docid": "af3297de35d49f774e2f31f31b09fd61",
"text": "This paper explores the phenomena of the emergence of the use of artificial intelligence in teaching and learning in higher education. It investigates educational implications of emerging technologies on the way students learn and how institutions teach and evolve. Recent technological advancements and the increasing speed of adopting new technologies in higher education are explored in order to predict the future nature of higher education in a world where artificial intelligence is part of the fabric of our universities. We pinpoint some challenges for institutions of higher education and student learning in the adoption of these technologies for teaching, learning, student support, and administration and explore further directions for research.",
"title": ""
},
{
"docid": "f5b8a5e034e8c619cea9cbdaca4a5bef",
"text": "x Acknowledgements xii Chapter",
"title": ""
},
{
"docid": "a413a29f8921bec12d71b8c0156e353b",
"text": "In the present study, we tested the hypothesis that a carbohydrate-protein (CHO-Pro) supplement would be more effective in the replenishment of muscle glycogen after exercise compared with a carbohydrate supplement of equal carbohydrate content (LCHO) or caloric equivalency (HCHO). After 2.5 +/- 0.1 h of intense cycling to deplete the muscle glycogen stores, subjects (n = 7) received, using a rank-ordered design, a CHO-Pro (80 g CHO, 28 g Pro, 6 g fat), LCHO (80 g CHO, 6 g fat), or HCHO (108 g CHO, 6 g fat) supplement immediately after exercise (10 min) and 2 h postexercise. Before exercise and during 4 h of recovery, muscle glycogen of the vastus lateralis was determined periodically by nuclear magnetic resonance spectroscopy. Exercise significantly reduced the muscle glycogen stores (final concentrations: 40.9 +/- 5.9 mmol/l CHO-Pro, 41.9 +/- 5.7 mmol/l HCHO, 40.7 +/- 5.0 mmol/l LCHO). After 240 min of recovery, muscle glycogen was significantly greater for the CHO-Pro treatment (88.8 +/- 4.4 mmol/l) when compared with the LCHO (70.0 +/- 4.0 mmol/l; P = 0.004) and HCHO (75.5 +/- 2.8 mmol/l; P = 0.013) treatments. Glycogen storage did not differ significantly between the LCHO and HCHO treatments. There were no significant differences in the plasma insulin responses among treatments, although plasma glucose was significantly lower during the CHO-Pro treatment. These results suggest that a CHO-Pro supplement is more effective for the rapid replenishment of muscle glycogen after exercise than a CHO supplement of equal CHO or caloric content.",
"title": ""
},
{
"docid": "49ba31f0452a57e29f7bbf15de477f2b",
"text": "In recent years, the IoT(Internet of Things)has been wide spreadly concerned as a new technology architecture integrated by many information technologies. However, IoT is a complex system. If there is no intensive study from the system point of view, it will be very hard to understand the key technologies of IoT. in order to better study IoT and its technologies this paper proposes an extent six-layer architecture of IoT based on network hierarchical structure firstly. then it discusses the key technologies involved in every layer of IoT, such as RFID, WSN, Internet, SOA, cloud computing and Web Service etc. According to this, an automatic recognition system is designed via integrating both RFID and WSN and a strategy is also brought forward to via integrating both RFID and Web Sevice. This result offers the feasibility of technology integration of IoT.",
"title": ""
},
{
"docid": "f6c874435978db83361f62bfe70a6681",
"text": "“Microbiology Topics” discusses various topics in microbiology of practical use in validation and compliance. We intend this column to be a useful resource for daily work applications. Reader comments, questions, and suggestions are needed to help us fulfill our objective for this column. Please send your comments and suggestions to column coordinator Scott Sutton at scott. [email protected] or journal managing editor Susan Haigney at [email protected].",
"title": ""
},
{
"docid": "8813c7c18f0629680f537bdd0afcb1ba",
"text": "A fault-tolerant (FT) control approach for four-wheel independently-driven (4WID) electric vehicles is presented. An adaptive control based passive fault-tolerant controller is designed to ensure the system stability when an in-wheel motor/motor driver fault happens. As an over-actuated system, it is challenging to isolate the faulty wheel and accurately estimate the control gain of the faulty in-wheel motor for 4WID electric vehicles. An active fault diagnosis approach is thus proposed to isolate and evaluate the fault. Based on the estimated control gain of the faulty in-wheel motor, the control efforts of all the four wheels are redistributed to relieve the torque demand on the faulty wheel. Simulations using a high-fidelity, CarSim, full-vehicle model show the effectiveness of the proposed in-wheel motor/motor driver fault diagnosis and fault-tolerant control approach.",
"title": ""
},
{
"docid": "8acae9ae82e89bd4d76bb9a706276149",
"text": "Metastasis leads to poor prognosis in colorectal cancer patients, and there is a growing need for new therapeutic targets. TMEM16A (ANO1, DOG1 or TAOS2) has recently been identified as a calcium-activated chloride channel (CaCC) and is reported to be overexpressed in several malignancies; however, its expression and function in colorectal cancer (CRC) remains unclear. In this study, we found expression of TMEM16A mRNA and protein in high-metastatic-potential SW620, HCT116 and LS174T cells, but not in primary HCT8 and SW480 cells, using RT-PCR, western blotting and immunofluorescence labeling. Patch-clamp recordings detected CaCC currents regulated by intracellular Ca(2+) and voltage in SW620 cells. Knockdown of TMEM16A by short hairpin RNAs (shRNA) resulted in the suppression of growth, migration and invasion of SW620 cells as detected by MTT, wound-healing and transwell assays. Mechanistically, TMEM16A depletion was accompanied by the dysregulation of phospho-MEK, phospho-ERK1/2 and cyclin D1 expression. Flow cytometry analysis showed that SW620 cells were inhibited from the G1 to S phase of the cell cycle in the TMEM16A shRNA group compared with the control group. In conclusion, our results indicate that TMEM16A CaCC is involved in growth, migration and invasion of metastatic CRC cells and provide evidence for TMEM16A as a potential drug target for treating metastatic colorectal carcinoma.",
"title": ""
},
{
"docid": "85c687f7b01d635fa9f46d0dd61098d3",
"text": "This paper provides a comprehensive survey of the technical achievements in the research area of Image Retrieval , especially Content-Based Image Retrieval, an area so active and prosperous in the past few years. The survey includes 100+ papers covering the research aspects of image feature representation and extraction, multi-dimensional indexing, and system design, three of the fundamental bases of Content-Based Image Retrieval. Furthermore, based on the state-of-the-art technology available now and the demand from real-world applications, open research issues are identiied, and future promising research directions are suggested.",
"title": ""
},
{
"docid": "6a2fa5998bf51eb40c1fd2d8f3dd8277",
"text": "In this paper, we propose a new descriptor for texture classification that is robust to image blurring. The descriptor utilizes phase information computed locally in a window for every image position. The phases of the four low-frequency coefficients are decorrelated and uniformly quantized in an eight-dimensional space. A histogram of the resulting code words is created and used as a feature in texture classification. Ideally, the low-frequency phase components are shown to be invariant to centrally symmetric blur. Although this ideal invariance is not completely achieved due to the finite window size, the method is still highly insensitive to blur. Because only phase information is used, the method is also invariant to uniform illumination changes. According to our experiments, the classification accuracy of blurred texture images is much higher with the new method than with the well-known LBP or Gabor filter bank methods. Interestingly, it is also slightly better for textures that are not blurred.",
"title": ""
},
{
"docid": "e6457f5257e95d727e06e212bef2f488",
"text": "The emerging ability to comply with caregivers' dictates and to monitor one's own behavior accordingly signifies a major growth of early childhood. However, scant attention has been paid to the developmental course of self-initiated regulation of behavior. This article summarizes the literature devoted to early forms of control and highlights the different philosophical orientations in the literature. Then, focusing on the period from early infancy to the beginning of the preschool years, the author proposes an ontogenetic perspective tracing the kinds of modulation or control the child is capable of along the way. The developmental sequence of monitoring behaviors that is proposed calls attention to contributions made by the growth of cognitive skills. The role of mediators (e.g., caregivers) is also discussed.",
"title": ""
},
{
"docid": "81b6059f24c827c271247b07f38f86d5",
"text": "We present a single-chip fully compliant Bluetooth radio fabricated in a digital 130-nm CMOS process. The transceiver is architectured from the ground up to be compatible with digital deep-submicron CMOS processes and be readily integrated with a digital baseband and application processor. The conventional RF frequency synthesizer architecture, based on the voltage-controlled oscillator and the phase/frequency detector and charge-pump combination, has been replaced with a digitally controlled oscillator and a time-to-digital converter, respectively. The transmitter architecture takes advantage of the wideband frequency modulation capability of the all-digital phase-locked loop with built-in automatic compensation to ensure modulation accuracy. The receiver employs a discrete-time architecture in which the RF signal is directly sampled and processed using analog and digital signal processing techniques. The complete chip also integrates power management functions and a digital baseband processor. Application of the presented ideas has resulted in significant area and power savings while producing structures that are amenable to migration to more advanced deep-submicron processes, as they become available. The entire IC occupies 10 mm/sup 2/ and consumes 28 mA during transmit and 41 mA during receive at 1.5-V supply.",
"title": ""
},
{
"docid": "4765f21109d36fb2631325fd0442aeac",
"text": "The functions of rewards are based primarily on their effects on behavior and are less directly governed by the physics and chemistry of input events as in sensory systems. Therefore, the investigation of neural mechanisms underlying reward functions requires behavioral theories that can conceptualize the different effects of rewards on behavior. The scientific investigation of behavioral processes by animal learning theory and economic utility theory has produced a theoretical framework that can help to elucidate the neural correlates for reward functions in learning, goal-directed approach behavior, and decision making under uncertainty. Individual neurons can be studied in the reward systems of the brain, including dopamine neurons, orbitofrontal cortex, and striatum. The neural activity can be related to basic theoretical terms of reward and uncertainty, such as contiguity, contingency, prediction error, magnitude, probability, expected value, and variance.",
"title": ""
},
{
"docid": "2ff3238a25fd7055517a2596e5e0cd7c",
"text": "Previous approaches for scene text detection have already achieved promising performances across various benchmarks. However, they usually fall short when dealing with challenging scenarios, even when equipped with deep neural network models, because the overall performance is determined by the interplay of multiple stages and components in the pipelines. In this work, we propose a simple yet powerful pipeline that yields fast and accurate text detection in natural scenes. The pipeline directly predicts words or text lines of arbitrary orientations and quadrilateral shapes in full images, eliminating unnecessary intermediate steps (e.g., candidate aggregation and word partitioning), with a single neural network. The simplicity of our pipeline allows concentrating efforts on designing loss functions and neural network architecture. Experiments on standard datasets including ICDAR 2015, COCO-Text and MSRA-TD500 demonstrate that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both accuracy and efficiency. On the ICDAR 2015 dataset, the proposed algorithm achieves an F-score of 0.7820 at 13.2fps at 720p resolution.",
"title": ""
},
{
"docid": "abdd0d2c13c884b22075b2c3f54a0dfc",
"text": "Global clock distribution for multi-GHz microprocessors has become increasingly difficult and time-consuming to design. As the frequency of the global clock continues to increase, the timing uncertainty introduced by the clock network − the skew and jitter − must reduce proportional to the clock period. However, the clock skew and jitter for conventional, buffered H-trees are proportional to latency, which has increased for recent generations of microprocessors. A global clock network that uses standing waves and coupled oscillators has the potential to significantly reduce both skew and jitter. Standing waves have the unique property that phase does not depend on position, meaning that there is ideally no skew. They have previously been used for board-level clock distribution, on coaxial cables, and on superconductive wires but have never been implemented on-chip due to the large losses of on-chip interconnects. Networks of coupled oscillators have a phase-averaging effect that reduces both skew and jitter. However, none of the previous implementations of coupled-oscillator clock networks use standing waves and some require considerable circuitry to couple the oscillators. In this thesis, a global clock network that incorporates standing waves and coupled oscillators to distribute a high-frequency clock signal with low skew and low jitter is",
"title": ""
},
{
"docid": "fa9a2112a687063c2fb3733af7b1ea61",
"text": "Prediction without justification has limited utility. Much of the success of neural models can be attributed to their ability to learn rich, dense and expressive representations. While these representations capture the underlying complexity and latent trends in the data, they are far from being interpretable. We propose a novel variant of denoising k-sparse autoencoders that generates highly efficient and interpretable distributed word representations (word embeddings), beginning with existing word representations from state-of-the-art methods like GloVe and word2vec. Through large scale human evaluation, we report that our resulting word embedddings are much more interpretable than the original GloVe and word2vec embeddings. Moreover, our embeddings outperform existing popular word embeddings on a diverse suite of benchmark downstream tasks.",
"title": ""
},
{
"docid": "95c4a2cfd063abdac35572927c4dcfc1",
"text": "Community detection is an important task in network analysis. A community (also referred to as a cluster) is a set of cohesive vertices that have more connections inside the set than outside. In many social and information networks, these communities naturally overlap. For instance, in a social network, each vertex in a graph corresponds to an individual who usually participates in multiple communities. In this paper, we propose an efficient overlapping community detection algorithm using a seed expansion approach. The key idea of our algorithm is to find good seeds, and then greedily expand these seeds based on a community metric. Within this seed expansion method, we investigate the problem of how to determine good seed nodes in a graph. In particular, we develop new seeding strategies for a personalized PageRank clustering scheme that optimizes the conductance community score. An important step in our method is the neighborhood inflation step where seeds are modified to represent their entire vertex neighborhood. Experimental results show that our seed expansion algorithm outperforms other state-of-the-art overlapping community detection methods in terms of producing cohesive clusters and identifying ground-truth communities. We also show that our new seeding strategies are better than existing strategies, and are thus effective in finding good overlapping communities in real-world networks.",
"title": ""
}
] |
scidocsrr
|
4ff724ea016ef2a5bbf496abbe3bcd45
|
Implications of Placebo and Nocebo Effects for Clinical Practice: Expert Consensus.
|
[
{
"docid": "d501d2758e600c307e41a329222bf7d6",
"text": "Placebo effects are beneficial effects that are attributable to the brain–mind responses to the context in which a treatment is delivered rather than to the specific actions of the drug. They are mediated by diverse processes — including learning, expectations and social cognition — and can influence various clinical and physiological outcomes related to health. Emerging neuroscience evidence implicates multiple brain systems and neurochemical mediators, including opioids and dopamine. We present an empirical review of the brain systems that are involved in placebo effects, focusing on placebo analgesia, and a conceptual framework linking these findings to the mind–brain processes that mediate them. This framework suggests that the neuropsychological processes that mediate placebo effects may be crucial for a wide array of therapeutic approaches, including many drugs.",
"title": ""
}
] |
[
{
"docid": "396dd0517369d892d249bb64fa410128",
"text": "Within the philosophy of language, pragmatics has tended to be seen as an adjunct to, and a means of solving problems in, semantics. A cognitive-scientific conception of pragmatics as a mental processing system responsible for interpreting ostensive communicative stimuli (specifically, verbal utterances) has effected a transformation in the pragmatic issues pursued and the kinds of explanation offered. Taking this latter perspective, I compare two distinct proposals on the kinds of processes, and the architecture of the system(s), responsible for the recovery of speaker meaning (both explicitly and implicitly communicated meaning). 1. Pragmatics as a Cognitive System 1.1 From Philosophy of Language to Cognitive Science Broadly speaking, there are two perspectives on pragmatics: the ‘philosophical’ and the ‘cognitive’. From the philosophical perspective, an interest in pragmatics has been largely motivated by problems and issues in semantics. A familiar instance of this was Grice’s concern to maintain a close semantic parallel between logical operators and their natural language counterparts, such as ‘not’, ‘and’, ‘or’, ‘if’, ‘every’, ‘a/some’, and ‘the’, in the face of what look like quite major divergences in the meaning of the linguistic elements (see Grice 1975, 1981). The explanation he provided was pragmatic, i.e. in terms of what occurs when the logical semantics of these terms is put to rational communicative use. Consider the case of ‘and’: (1) a. Mary went to a movie and Sam read a novel. b. She gave him her key and he opened the door. c. She insulted him and he left the room. While (a) seems to reflect the straightforward truth-functional symmetrical connection, (b) and (c) communicate a stronger asymmetric relation: temporal Many thanks to Richard Breheny, Sam Guttenplan, Corinne Iten, Deirdre Wilson and Vladimir Zegarac for helpful comments and support during the writing of this paper. Address for correspondence: Department of Phonetics & Linguistics, University College London, Gower Street, London WC1E 6BT, UK. Email: robyn linguistics.ucl.ac.uk Mind & Language, Vol. 17 Nos 1 and 2 February/April 2002, pp. 127–148. Blackwell Publishers Ltd. 2002, 108 Cowley Road, Oxford, OX4 1JF, UK and 350 Main Street, Malden, MA 02148, USA.",
"title": ""
},
{
"docid": "9a2d79d9df9e596e26f8481697833041",
"text": "Novelty search is a recent artificial evolution technique that challenges traditional evolutionary approaches. In novelty search, solutions are rewarded based on their novelty, rather than their quality with respect to a predefined objective. The lack of a predefined objective precludes premature convergence caused by a deceptive fitness function. In this paper, we apply novelty search combined with NEAT to the evolution of neural controllers for homogeneous swarms of robots. Our empirical study is conducted in simulation, and we use a common swarm robotics task—aggregation, and a more challenging task—sharing of an energy recharging station. Our results show that novelty search is unaffected by deception, is notably effective in bootstrapping evolution, can find solutions with lower complexity than fitness-based evolution, and can find a broad diversity of solutions for the same task. Even in non-deceptive setups, novelty search achieves solution qualities similar to those obtained in traditional fitness-based evolution. Our study also encompasses variants of novelty search that work in concert with fitness-based evolution to combine the exploratory character of novelty search with the exploitatory character of objective-based evolution. We show that these variants can further improve the performance of novelty search. Overall, our study shows that novelty search is a promising alternative for the evolution of controllers for robotic swarms.",
"title": ""
},
{
"docid": "02a6e024c1d318862ad4c17b9a56ca36",
"text": "Artificial food colors (AFCs) have not been established as the main cause of attention-deficit hyperactivity disorder (ADHD), but accumulated evidence suggests that a subgroup shows significant symptom improvement when consuming an AFC-free diet and reacts with ADHD-type symptoms on challenge with AFCs. Of children with suspected sensitivities, 65% to 89% reacted when challenged with at least 100 mg of AFC. Oligoantigenic diet studies suggested that some children in addition to being sensitive to AFCs are also sensitive to common nonsalicylate foods (milk, chocolate, soy, eggs, wheat, corn, legumes) as well as salicylate-containing grapes, tomatoes, and orange. Some studies found \"cosensitivity\" to be more the rule than the exception. Recently, 2 large studies demonstrated behavioral sensitivity to AFCs and benzoate in children both with and without ADHD. A trial elimination diet is appropriate for children who have not responded satisfactorily to conventional treatment or whose parents wish to pursue a dietary investigation.",
"title": ""
},
{
"docid": "92d858331c4da840c0223a7031a02280",
"text": "The advent use of new online social media such as articles, blogs, message boards, news channels, and in general Web content has dramatically changed the way people look at various things around them. Today, it’s a daily practice for many people to read news online. People's perspective tends to undergo a change as per the news content they read. The majority of the content that we read today is on the negative aspects of various things e.g. corruption, rapes, thefts etc. Reading such news is spreading negativity amongst the people. Positive news seems to have gone into a hiding. The positivity surrounding the good news has been drastically reduced by the number of bad news. This has made a great practical use of Sentiment Analysis and there has been more innovation in this area in recent days. Sentiment analysis refers to a broad range of fields of text mining, natural language processing and computational linguistics. It traditionally emphasizes on classification of text document into positive and negative categories. Sentiment analysis of any text document has emerged as the most useful application in the area of sentiment analysis. The objective of this project is to provide a platform for serving good news and create a positive environment. This is achieved by finding the sentiments of the news articles and filtering out the negative articles. This would enable us to focus only on the good news which will help spread positivity around and would allow people to think positively.",
"title": ""
},
{
"docid": "73b150681d7de50ada8e046a3027085f",
"text": "We introduce a new model, the Recurrent Entity Network (EntNet). It is equipped with a dynamic long-term memory which allows it to maintain and update a representation of the state of the world as it receives new data. For language understanding tasks, it can reason on-the-fly as it reads text, not just when it is required to answer a question or respond as is the case for a Memory Network (Sukhbaatar et al., 2015). Like a Neural Turing Machine or Differentiable Neural Computer (Graves et al., 2014; 2016) it maintains a fixed size memory and can learn to perform location and content-based read and write operations. However, unlike those models it has a simple parallel architecture in which several memory locations can be updated simultaneously. The EntNet sets a new state-of-the-art on the bAbI tasks, and is the first method to solve all the tasks in the 10k training examples setting. We also demonstrate that it can solve a reasoning task which requires a large number of supporting facts, which other methods are not able to solve, and can generalize past its training horizon. It can also be practically used on large scale datasets such as Children’s Book Test, where it obtains competitive performance, reading the story in a single pass.",
"title": ""
},
{
"docid": "72939e7f99727408acfa3dd3977b38ad",
"text": "We present a method for generating colored 3D shapes from natural language. To this end, we first learn joint embeddings of freeform text descriptions and colored 3D shapes. Our model combines and extends learning by association and metric learning approaches to learn implicit cross-modal connections, and produces a joint representation that captures the many-to-many relations between language and physical properties of 3D shapes such as color and shape. To evaluate our approach, we collect a large dataset of natural language descriptions for physical 3D objects in the ShapeNet dataset. With this learned joint embedding we demonstrate text-to-shape retrieval that outperforms baseline approaches. Using our embeddings with a novel conditional Wasserstein GAN framework, we generate colored 3D shapes from text. Our method is the first to connect natural language text with realistic 3D objects exhibiting rich variations in color, texture, and shape detail.",
"title": ""
},
{
"docid": "61dcc07734c98bf0ad01a98fe0c55bf4",
"text": "The system includes terminal fingerprint acquisitio n module and attendance module. It can realize automatically such functions as information acquisi tion of fingerprint, processing, and wireless trans mission, fingerprint matching and making an attendance repor t. After taking the attendance, this system sends t he attendance of every student to their parent’s mobil e through GSM and also stored the attendance of res pective student to calculate the percentage of attendance a d alerts to class in charge. Attendance system fac ilitates access to the attendance of a particular student in a particular class. This system eliminates the nee d for stationary materials and personnel for the keeping of records and efforts of class in charge.",
"title": ""
},
{
"docid": "18fcdcadc3290f9c8dd09f0aa1a27e8f",
"text": "The Industry 4.0 is a vision that includes connecting more intensively physical systems with their virtual counterparts in computers. This computerization of manufacturing will bring many advantages, including allowing data gathering, integration and analysis in the scale not seen earlier. In this paper we describe our Semantic Big Data Historian that is intended to handle large volumes of heterogeneous data gathered from distributed data sources. We describe the approach and implementation with a special focus on using Semantic Web technologies for integrating the data.",
"title": ""
},
{
"docid": "1306d2579c8af0af65805da887d283b0",
"text": "Traceability allows tracking products through all stages of a supply chain, which is crucial for product quality control. To provide accountability and forensic information, traceability information must be secured. This is challenging because traceability systems often must adapt to changes in regulations and to customized traceability inspection processes. OriginChain is a real-world traceability system using a blockchain. Blockchains are an emerging data storage technology that enables new forms of decentralized architectures. Components can agree on their shared states without trusting a central integration point. OriginChain’s architecture provides transparent tamper-proof traceability information, automates regulatory compliance checking, and enables system adaptability.",
"title": ""
},
{
"docid": "62425652b113c72c668cf9c73b7c8480",
"text": "Knowledge graph (KG) completion aims to fill the missing facts in a KG, where a fact is represented as a triple in the form of (subject, relation, object). Current KG completion models compel twothirds of a triple provided (e.g., subject and relation) to predict the remaining one. In this paper, we propose a new model, which uses a KGspecific multi-layer recurrent neutral network (RNN) to model triples in a KG as sequences. It outperformed several state-of-the-art KG completion models on the conventional entity prediction task for many evaluation metrics, based on two benchmark datasets and a more difficult dataset. Furthermore, our model is enabled by the sequential characteristic and thus capable of predicting the whole triples only given one entity. Our experiments demonstrated that our model achieved promising performance on this new triple prediction task.",
"title": ""
},
{
"docid": "c5f3d1465ad3cec4578fe64e57caee66",
"text": "Clustering of subsequence time series remains an open issue in time series clustering. Subsequence time series clustering is used in different fields, such as e-commerce, outlier detection, speech recognition, biological systems, DNA recognition, and text mining. One of the useful fields in the domain of subsequence time series clustering is pattern recognition. To improve this field, a sequence of time series data is used. This paper reviews some definitions and backgrounds related to subsequence time series clustering. The categorization of the literature reviews is divided into three groups: preproof, interproof, and postproof period. Moreover, various state-of-the-art approaches in performing subsequence time series clustering are discussed under each of the following categories. The strengths and weaknesses of the employed methods are evaluated as potential issues for future studies.",
"title": ""
},
{
"docid": "717489cca3ec371bfcc66efd1e6c1185",
"text": "This paper presents an algorithm for calibrating strapdown magnetometers in the magnetic field domain. In contrast to the traditional method of compass swinging, which computes a series of heading correction parameters and, thus, is limited to use with two-axis systems, this algorithm estimates magnetometer output errors directly. Therefore, this new algorithm can be used to calibrate a full three-axis magnetometer triad. The calibration algorithm uses an iterated, batch least squares estimator which is initialized using a novel two-step nonlinear estimator. The algorithm is simulated to validate convergence characteristics and further validated on experimental data collected using a magnetometer triad. It is shown that the post calibration residuals are small and result in a system with heading errors on the order of 1 to 2 degrees.",
"title": ""
},
{
"docid": "33da6c0dcc2e62059080a4b1d220ef8b",
"text": "We generalize the scattering transform to graphs and consequently construct a convolutional neural network on graphs. We show that under certain conditions, any feature generated by such a network is approximately invariant to permutations and stable to graph manipulations. Numerical results demonstrate competitive performance on relevant datasets.",
"title": ""
},
{
"docid": "db225a91f1730d69aaf47a7c1b20dbc4",
"text": "We describe how malicious customers can attack the availability of Content Delivery Networks (CDNs) by creating forwarding loops inside one CDN or across multiple CDNs. Such forwarding loops cause one request to be processed repeatedly or even indefinitely, resulting in undesired resource consumption and potential Denial-of-Service attacks. To evaluate the practicality of such forwarding-loop attacks, we examined 16 popular CDN providers and found all of them are vulnerable to some form of such attacks. While some CDNs appear to be aware of this threat and have adopted specific forwarding-loop detection mechanisms, we discovered that they can all be bypassed with new attack techniques. Although conceptually simple, a comprehensive defense requires collaboration among all CDNs. Given that hurdle, we also discuss other mitigations that individual CDN can implement immediately. At a higher level, our work underscores the hazards that can arise when a networked system provides users with control over forwarding, particularly in a context that lacks a single point of administrative control.",
"title": ""
},
{
"docid": "45372f5c198270179a30e3ea46a9d44d",
"text": "OBJECTIVE\nThe objective of this study was to investigate the relationship between breakfast type, energy intake and body mass index (BMI). We hypothesized not only that breakfast consumption itself is associated with BMI, but that the type of food eaten at breakfast also affects BMI.\n\n\nMETHODS\nData from the Third National Health and Nutrition Examination Survey (NHANES III), a large, population-based study conducted in the United States from 1988 to 1994, were analyzed for breakfast type, total daily energy intake, and BMI. The analyzed breakfast categories were \"Skippers,\" \"Meat/eggs,\" \"Ready-to-eat cereal (RTEC),\" \"Cooked cereal,\" \"Breads,\" \"Quick Breads,\" \"Fruits/vegetables,\" \"Dairy,\" \"Fats/sweets,\" and \"Beverages.\" Analysis of covariance was used to estimate adjusted mean body mass index (BMI) and energy intake (kcal) as dependent variables. Covariates included age, gender, race, smoking, alcohol intake, physical activity and poverty index ratio.\n\n\nRESULTS\nSubjects who ate RTEC, Cooked cereal, or Quick Breads for breakfast had significantly lower BMI compared to Skippers and Meat and Egg eaters (p < or = 0.01). Breakfast skippers and fruit/vegetable eaters had the lowest daily energy intake. The Meat and Eggs eaters had the highest daily energy intake and one of the highest BMIs.\n\n\nCONCLUSIONS\nThis analysis provides evidence that skipping breakfast is not an effective way to manage weight. Eating cereal (ready-to-eat or cooked cereal) or quick breads for breakfast is associated with significantly lower body mass index compared to skipping breakfast or eating meats and/or eggs for breakfast.",
"title": ""
},
{
"docid": "61406f27199acc5f034c2721d66cda89",
"text": "Fischler PER •Sequence of tokens mapped to word embeddings. •Bidirectional LSTM builds context-dependent representations for each word. •A small feedforward layer encourages generalisation. •Conditional Random Field (CRF) at the top outputs the most optimal label sequence for the sentence. •Using character-based dynamic embeddings (Rei et al., 2016) to capture morphological patterns and unseen words.",
"title": ""
},
{
"docid": "5bf9aeb37fc1a82420b2ff4136f547d0",
"text": "Visual Question Answering (VQA) is a popular research problem that involves inferring answers to natural language questions about a given visual scene. Recent neural network approaches to VQA use attention to select relevant image features based on the question. In this paper, we propose a novel Dual Attention Network (DAN) that not only attends to image features, but also to question features. The selected linguistic and visual features are combined by a recurrent model to infer the final answer. We experiment with different question representations and do several ablation studies to evaluate the model on the challenging VQA dataset.",
"title": ""
},
{
"docid": "e144d8c0f046ad6cd2e5c71844b2b532",
"text": "Photogrammetry is the traditional method of surface reconstruction such as the generation of DTMs. Recently, LIDAR emerged as a new technology for rapidly capturing data on physical surfaces. The high accuracy and automation potential results in a quick delivery of DEMs/DTMs derived from the raw laser data. The two methods deliver complementary surface information. Thus it makes sense to combine data from the two sensors to arrive at a more robust and complete surface reconstruction. This paper describes two aspects of merging aerial imagery and LIDAR data. The establishment of a common reference frame is an absolute prerequisite. We solve this alignment problem by utilizing sensor-invariant features. Such features correspond to the same object space phenomena, for example to breaklines and surface patches. Matched sensor invariant features lend themselves to establishing a common reference frame. Feature-level fusion is performed with sensor specific features that are related to surface characteristics. We show the synergism between these features resulting in a richer and more abstract surface description.",
"title": ""
},
{
"docid": "57e2adea74edb5eaf5b2af00ab3c625e",
"text": "Although scholars agree that moral emotions are critical for deterring unethical and antisocial behavior, there is disagreement about how 2 prototypical moral emotions--guilt and shame--should be defined, differentiated, and measured. We addressed these issues by developing a new assessment--the Guilt and Shame Proneness scale (GASP)--that measures individual differences in the propensity to experience guilt and shame across a range of personal transgressions. The GASP contains 2 guilt subscales that assess negative behavior-evaluations and repair action tendencies following private transgressions and 2 shame subscales that assess negative self-evaluations (NSEs) and withdrawal action tendencies following publically exposed transgressions. Both guilt subscales were highly correlated with one another and negatively correlated with unethical decision making. Although both shame subscales were associated with relatively poor psychological functioning (e.g., neuroticism, personal distress, low self-esteem), they were only weakly correlated with one another, and their relationships with unethical decision making diverged. Whereas shame-NSE constrained unethical decision making, shame-withdraw did not. Our findings suggest that differentiating the tendency to make NSEs following publically exposed transgressions from the tendency to hide or withdraw from public view is critically important for understanding and measuring dispositional shame proneness. The GASP's ability to distinguish these 2 classes of responses represents an important advantage of the scale over existing assessments. Although further validation research is required, the present studies are promising in that they suggest the GASP has the potential to be an important measurement tool for detecting individuals susceptible to corruption and unethical behavior.",
"title": ""
},
{
"docid": "b1b56020802d11d1f5b2badb177b06b9",
"text": "The explosive growth of the world-wide-web and the emergence of e-commerce has led to the development of recommender systems--a personalized information filtering technology used to identify a set of N items that will be of interest to a certain user. User-based and model-based collaborative filtering are the most successful technology for building recommender systems to date and is extensively used in many commercial recommender systems. The basic assumption in these algorithms is that there are sufficient historical data for measuring similarity between products or users. However, this assumption does not hold in various application domains such as electronics retail, home shopping network, on-line retail where new products are introduced and existing products disappear from the catalog. Another such application domains is home improvement retail industry where a lot of products (such as window treatments, bathroom, kitchen or deck) are custom made. Each product is unique and there are very little duplicate products. In this domain, the probability of the same exact two products bought together is close to zero. In this paper, we discuss the challenges of providing recommendation in the domains where no sufficient historical data exist for measuring similarity between products or users. We present feature-based recommendation algorithms that overcome the limitations of the existing top-n recommendation algorithms. The experimental evaluation of the proposed algorithms in the real life data sets shows a great promise. The pilot project deploying the proposed feature-based recommendation algorithms in the on-line retail web site shows 75% increase in the recommendation revenue for the first 2 month period.",
"title": ""
}
] |
scidocsrr
|
9c0b045e65668e43100e9d2103d0d65b
|
Bayesian nonparametric sparse seemingly unrelated regression model (SUR)
|
[
{
"docid": "660f957b70e53819724e504ed3de0776",
"text": "We propose several econometric measures of connectedness based on principalcomponents analysis and Granger-causality networks, and apply them to the monthly returns of hedge funds, banks, broker/dealers, and insurance companies. We find that all four sectors have become highly interrelated over the past decade, likely increasing the level of systemic risk in the finance and insurance industries through a complex and time-varying network of relationships. These measures can also identify and quantify financial crisis periods, and seem to contain predictive power in out-of-sample tests. Our results show an asymmetry in the degree of connectedness among the four sectors, with banks playing a much more important role in transmitting shocks than other financial institutions. & 2011 Elsevier B.V. All rights reserved.",
"title": ""
}
] |
[
{
"docid": "981d140731d8a3cdbaebacc1fd26484a",
"text": "A new wideband bandpass filter (BPF) with composite short- and open-circuited stubs has been proposed in this letter. With the two kinds of stubs, two pairs of transmission zeros (TZs) can be produced on the two sides of the desired passband. The even-/odd-mode analysis method is used to derive the input admittances of its bisection circuits. After the Richard's transformation, these bisection circuits are in the same format of two LC circuits. By combining these two LC circuits, the equivalent circuit of the proposed filter is obtained. Through the analysis of the equivalent circuit, the open-circuited stubs introduce transmission poles in the complex frequencies and one pair of TZs in the real frequencies, and the short-circuited stubs generate one pair of TZs to block the dc component. A wideband BPF is designed and fabricated to verify the proposed design principle.",
"title": ""
},
{
"docid": "fc6726bddf3d70b7cb3745137f4583c1",
"text": "Maximum power point tracking (MPPT) is a very important necessity in a system of energy conversion from a renewable energy source. Many research papers have been produced with various schemes over past decades for the MPPT in photovoltaic (PV) system. This research paper inspires its motivation from the fact that the keen study of these existing techniques reveals that there is still quite a need for an absolutely generic and yet very simple MPPT controller which should have all the following traits: total independence from system's parameters, ability to reach the global maxima in minimal possible steps, the correct sense of tracking direction despite the abrupt atmospheric or parametrical changes, and finally having a very cost-effective and energy efficient hardware with the complexity no more than that of a minimal MPPT algorithm like Perturb and Observe (P&O). The MPPT controller presented in this paper is a successful attempt to fulfil all these requirements. It extends the MPPT techniques found in the recent research papers with some innovations in the control algorithm and a simplistic hardware. The simulation results confirm that the proposed MPPT controller is very fast, very efficient, very simple and low cost as compared to the contemporary ones.",
"title": ""
},
{
"docid": "bfa2f3edf0bd1c27bfe3ab90dde6fd75",
"text": "Sophorolipids are biosurfactants belonging to the class of the glycolipid, produced mainly by the osmophilic yeast Candida bombicola. Structurally they are composed by a disaccharide sophorose (2’-O-β-D-glucopyranosyl-β-D-glycopyranose) which is linked β -glycosidically to a long fatty acid chain with generally 16 to 18 atoms of carbon with one or more unsaturation. They are produced as a complex mix containing up to 40 molecules and associated isomers, depending on the species which produces it, the substrate used and the culture conditions. They present properties which are very similar or superior to the synthetic surfactants and other biosurfactants with the advantage of presenting low toxicity, higher biodegradability, better environmental compatibility, high selectivity and specific activity in a broad range of temperature, pH and salinity conditions. Its biological activities are directly related with its chemical structure. Sophorolipids possess a great potential for application in areas such as: food; bioremediation; cosmetics; pharmaceutical; biomedicine; nanotechnology and enhanced oil recovery.",
"title": ""
},
{
"docid": "dded827d0b9c513ad504663547018749",
"text": "In this paper, various key points in the rotor design of a low-cost permanent-magnet-assisted synchronous reluctance motor (PMa-SynRM) are introduced and their effects are studied. Finite-element approach has been utilized to show the effects of these parameters on the developed average electromagnetic torque and total d-q inductances. One of the features considered in the design of this motor is the magnetization of the permanent magnets mounted in the rotor core using the stator windings. This feature will cause a reduction in cost and ease of manufacturing. Effectiveness of the design procedure is validated by presenting simulation and experimental results of a 1.5-kW prototype PMa-SynRM",
"title": ""
},
{
"docid": "3c66b3db12b8695863d79f4edcc74103",
"text": "Several studies have found that women tend to demonstrate stronger preferences for masculine men as short-term partners than as long-term partners, though there is considerable variation among women in the magnitude of this effect. One possible source of this variation is individual differences in the extent to which women perceive masculine men to possess antisocial traits that are less costly in short-term relationships than in long-term relationships. Consistent with this proposal, here we show that the extent to which women report stronger preferences for men with low (i.e., masculine) voice pitch as short-term partners than as long-term partners is associated with the extent to which they attribute physical dominance and low trustworthiness to these masculine voices. Thus, our findings suggest that variation in the extent to which women attribute negative personality characteristics to masculine men predicts individual differences in the magnitude of the effect of relationship context on women's masculinity preferences, highlighting the importance of perceived personality attributions for individual differences in women's judgments of men's vocal attractiveness and, potentially, their mate preferences.",
"title": ""
},
{
"docid": "375d5fcb41b7fb3a2f60822720608396",
"text": "We present a full-stack design to accelerate deep learning inference with FPGAs. Our contribution is two-fold. At the software layer, we leverage and extend TVM, the end-to-end deep learning optimizing compiler, in order to harness FPGA-based acceleration. At the the hardware layer, we present the Versatile Tensor Accelerator (VTA) which presents a generic, modular, and customizable architecture for TPU-like accelerators. Our results take a ResNet-18 description in MxNet and compiles it down to perform 8-bit inference on a 256-PE accelerator implemented on a low-cost Xilinx Zynq FPGA, clocked at 100MHz. Our full hardware acceleration stack will be made available for the community to reproduce, and build upon at http://github.com/uwsaml/vta.",
"title": ""
},
{
"docid": "bf7b3cdb178fd1969257f56c0770b30b",
"text": "Relation Extraction is an important subtask of Information Extraction which has the potential of employing deep learning (DL) models with the creation of large datasets using distant supervision. In this review, we compare the contributions and pitfalls of the various DL models that have been used for the task, to help guide the path ahead.",
"title": ""
},
{
"docid": "3ed0e387f8e6a8246b493afbb07a9312",
"text": "Van den Ende-Gupta Syndrome (VDEGS) is an autosomal recessive disorder characterized by blepharophimosis, distinctive nose, hypoplastic maxilla, and skeletal abnormalities. Using homozygosity mapping in four VDEGS patients from three consanguineous families, Anastacio et al. [Anastacio et al. (2010); Am J Hum Genet 87:553-559] identified homozygous mutations in SCARF2, located at 22q11.2. Bedeschi et al. [2010] described a VDEGS patient with sclerocornea and cataracts with compound heterozygosity for the common 22q11.2 microdeletion and a hemizygous SCARF2 mutation. Because sclerocornea had been described in DiGeorge-velo-cardio-facial syndrome but not in VDEGS, they suggested that the ocular abnormalities were caused by the 22q11.2 microdeletion. We report on a 23-year-old male who presented with bilateral sclerocornea and the VDGEGS phenotype who was subsequently found to be homozygous for a 17 bp deletion in exon 4 of SCARF2. The occurrence of bilateral sclerocornea in our patient together with that of Bedeschi et al., suggests that the full VDEGS phenotype may include sclerocornea resulting from homozygosity or compound heterozygosity for loss of function variants in SCARF2.",
"title": ""
},
{
"docid": "269e1c0d737beafd10560360049c6ee3",
"text": "There is no doubt that Social media has gained wider acceptability and usability and is also becoming probably the most important communication tools among students especially at the higher level of educational pursuit. As much as social media is viewed as having bridged the gap in communication that existed. Within the social media Facebook, Twitter and others are now gaining more and more patronage. These websites and social forums are way of communicating directly with other people socially. Social media has the potentials of influencing decision-making in a very short time regardless of the distance. On the bases of its influence, benefits and demerits this study is carried out in order to highlight the potentials of social media in the academic setting by collaborative learning and improve the students' academic performance. The results show that collaborative learning positively and significantly with interactive with peers, interactive with teachers and engagement which impact the students’ academic performance.",
"title": ""
},
{
"docid": "066fdb2deeca1d13218f16ad35fe5f86",
"text": "As manga (Japanese comics) have become common content in many countries, it is necessary to search manga by text query or translate them automatically. For these applications, we must first extract texts from manga. In this paper, we develop a method to detect text regions in manga. Taking motivation from methods used in scene text detection, we propose an approach using classifiers for both connected components and regions. We have also developed a text region dataset of manga, which enables learning and detailed evaluations of methods used to detect text regions. Experiments using the dataset showed that our text detection method performs more effectively than existing methods.",
"title": ""
},
{
"docid": "1010eaf1bb85e14c31b5861367115b32",
"text": "This letter presents a design of a dual-orthogonal, linear polarized antenna for the UWB-IR technology in the frequency range from 3.1 to 10.6 GHz. The antenna is compact with dimensions of 40 times 40 mm of the radiation plane, which is orthogonal to the radiation direction. Both the antenna and the feeding network are realized in planar technology. The radiation principle and the computed design are verified by a prototype. The input impedance matching is better than -6 dB. The measured results show a mean gain in copolarization close to 4 dBi. The cross-polarizations suppression w.r.t. the copolarization is better than 20 dB. Due to its features, the antenna is suited for polarimetric ultrawideband (UWB) radar and UWB multiple-input-multiple-output (MIMO) applications.",
"title": ""
},
{
"docid": "580d83a0e627daedb45fe55e3f9b6883",
"text": "With near exponential growth predicted in the number of Internet of Things (IoT) based devices within networked systems there is need of a means of providing their flexible and secure integration. Software Defined Networking (SDN) is a concept that allows for the centralised control and configuration of network devices, and also provides opportunities for the dynamic control of network traffic. This paper proposes the use of an SDN gateway as a distributed means of monitoring the traffic originating from and directed to IoT based devices. This gateway can then both detect anomalous behaviour and perform an appropriate response (blocking, forwarding, or applying Quality of Service). Initial results demonstrate that, while the addition of the attack detection functionality has an impact on the number of flow installations possible per second, it can successfully detect and block TCP and ICMP flood based attacks.",
"title": ""
},
{
"docid": "31a0b00a79496deccdec4d3629fcbf88",
"text": "The Human Papillomavirus (HPV) E6 protein is one of three oncoproteins encoded by the virus. It has long been recognized as a potent oncogene and is intimately associated with the events that result in the malignant conversion of virally infected cells. In order to understand the mechanisms by which E6 contributes to the development of human malignancy many laboratories have focused their attention on identifying the cellular proteins with which E6 interacts. In this review we discuss these interactions in the light of their respective contributions to the malignant progression of HPV transformed cells.",
"title": ""
},
{
"docid": "0d5628aa4bfeefbb6dedfa59325ed517",
"text": "The effects of caffeine on cognition were reviewed based on the large body of literature available on the topic. Caffeine does not usually affect performance in learning and memory tasks, although caffeine may occasionally have facilitatory or inhibitory effects on memory and learning. Caffeine facilitates learning in tasks in which information is presented passively; in tasks in which material is learned intentionally, caffeine has no effect. Caffeine facilitates performance in tasks involving working memory to a limited extent, but hinders performance in tasks that heavily depend on working memory, and caffeine appears to rather improve memory performance under suboptimal alertness conditions. Most studies, however, found improvements in reaction time. The ingestion of caffeine does not seem to affect long-term memory. At low doses, caffeine improves hedonic tone and reduces anxiety, while at high doses, there is an increase in tense arousal, including anxiety, nervousness, jitteriness. The larger improvement of performance in fatigued subjects confirms that caffeine is a mild stimulant. Caffeine has also been reported to prevent cognitive decline in healthy subjects but the results of the studies are heterogeneous, some finding no age-related effect while others reported effects only in one sex and mainly in the oldest population. In conclusion, it appears that caffeine cannot be considered a ;pure' cognitive enhancer. Its indirect action on arousal, mood and concentration contributes in large part to its cognitive enhancing properties.",
"title": ""
},
{
"docid": "a32d80b0b446f91832f92ca68597821d",
"text": "PURPOSE\nWe define the cause of the occurrence of Peyronie's disease.\n\n\nMATERIALS AND METHODS\nClinical evaluation of a large number of patients with Peyronie's disease, while taking into account the pathological and biochemical findings of the penis in patients who have been treated by surgery, has led to an understanding of the relationship of the anatomical structure of the penis to its rigidity during erection, and how the effect of the stress imposed upon those structures during intercourse is modified by the loss of compliance resulting from aging of the collagen composing those structures. Peyronie's disease occurs most frequently in middle-aged men, less frequently in older men and infrequently in younger men who have more elastic tissues. During erection, when full tumescence has occurred and the elastic tissues of the penis have reached the limit of their compliance, the strands of the septum give vertical rigidity to the penis. Bending the erect penis out of column stresses the attachment of the septal strands to the tunica albuginea.\n\n\nRESULTS\nPlaques of Peyronie's disease are found where the strands of the septum are attached in the dorsal or ventral aspect of the penis. The pathological scar in the tunica albuginea of the corpora cavernosa in Peyronie's disease is characterized by excessive collagen accumulation, fibrin deposition and disordered elastic fibers in the plaque.\n\n\nCONCLUSIONS\nWe suggest that Peyronie's disease results from repetitive microvascular injury, with fibrin deposition and trapping in the tissue space that is not adequately cleared during the normal remodeling and repair of the tear in the tunica. Fibroblast activation and proliferation, enhanced vessel permeability and generation of chemotactic factors for leukocytes are stimulated by fibrin deposited in the normal process of wound healing. However, in Peyronie's disease the lesion fails to resolve either due to an inability to clear the original stimulus or due to further deposition of fibrin subsequent to repeated trauma. Collagen is also trapped and pathological fibrosis ensues.",
"title": ""
},
{
"docid": "ce6296ae51be4e6fe3d36be618cdfe75",
"text": "UNLABELLED\nOBJECTIVES. Understanding the factors that promote quality of life in old age has been a staple of social gerontology since its inception and remains a significant theme in aging research. The purpose of this article was to review the state of the science with regard to subjective well-being (SWB) in later life and to identify promising directions for future research.\n\n\nMETHODS\nThis article is based on a review of literature on SWB in aging, sociological, and psychological journals. Although the materials reviewed date back to the early 1960s, the emphasis is on publications in the past decade.\n\n\nRESULTS\nResearch to date paints an effective portrait of the epidemiology of SWB in late life and the factors associated with it. Although the research base is large, causal inferences about the determinants of SWB remain problematic. Two recent contributions to the research base are highlighted as emerging issues: studies of secular trends in SWB and cross-national studies. Discussion. The review ends with discussion of priority issues for future research.",
"title": ""
},
{
"docid": "12271ff5fdaf797a6f1063088d0cf2a6",
"text": "A resonant current-fed push-pull converter with active clamping is investigated for vehicle inverter application in this paper. A communal clamping capacitor and a voltage doubler topology are employed. Moreover, two kinds of resonances to realize ZVS ON of MOSFETs or to remove reverse-recovery problem of secondary diodes are analyzed and an optimized resonance is utilized for volume restriction. At last, a 150W prototype with 10V to 16V input voltage, 360V output voltage is implemented to verify the design.",
"title": ""
},
{
"docid": "373bffe67da88ece93006e97f3ae6c53",
"text": "This paper proposes a novel mains voltage proportional input current control concept eliminating the multiplication of the output voltage controller output and the mains ac phase voltages for the derivation of mains phase current reference values of a three-phase/level/switch pulsewidth-modulated (VIENNA) rectifier system. Furthermore, the concept features low input current ripple amplitude as, e.g., achieved for space-vector modulation, a low amplitude of the third harmonic of the current flowing into the output voltage center point, and a wide range of modulation. The practical realization of the analog control concept as well as experimental results for application with a 5-kW prototype of the pulsewidth-modulated rectifier are presented. Furthermore, a control scheme which relies only on the absolute values of the input phase currents and a modified control scheme which does not require information about the mains phase voltages are presented.",
"title": ""
},
{
"docid": "a6cf168632efb2a4c4a4d91c4161dc24",
"text": "This paper presents a systematic approach to transform various fault models to a unified model such that all faults of interest can be handled in one ATPG run. The fault models that can be transformed include, but are not limited to, stuck-at faults, various types of bridging faults, and cell-internal faults. The unified model is the aggressor-victim type of bridging fault model. Two transformation methods, namely fault-based and pattern-based transformations, are developed for cell-external and cell-internal faults, respectively. With the proposed approach, one can use an ATPG tool for bridging faults to deal with the test generation problems of multiple fault models simultaneously. Hence the total test generation time can be reduced and highly compact test sets can be obtained. Experimental results show that on average 54.94% (16.45%) and 47.22% (17.51%) test pattern volume reductions are achieved compared to the method that deals with the three fault models separately without (with) fault dropping for ISCAS'89 andIWLS'05 circuits, respectively.",
"title": ""
},
{
"docid": "835e07466611989c2a4979a5e087156f",
"text": "AIM\nInternet gaming disorder (IGD) is a serious disorder leading to and maintaining pertinent personal and social impairment. IGD has to be considered in view of heterogeneous and incomplete concepts. We therefore reviewed the scientific literature on IGD to provide an overview focusing on definitions, symptoms, prevalence, and aetiology.\n\n\nMETHOD\nWe systematically reviewed the databases ERIC, PsyARTICLES, PsycINFO, PSYNDEX, and PubMed for the period January 1991 to August 2016, and additionally identified secondary references.\n\n\nRESULTS\nThe proposed definition in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition provides a good starting point for diagnosing IGD but entails some disadvantages. Developing IGD requires several interacting internal factors such as deficient self, mood and reward regulation, problems of decision-making, and external factors such as deficient family background and social skills. In addition, specific game-related factors may promote IGD. Summarizing aetiological knowledge, we suggest an integrated model of IGD elucidating the interplay of internal and external factors.\n\n\nINTERPRETATION\nSo far, the concept of IGD and the pathways leading to it are not entirely clear. In particular, long-term follow-up studies are missing. IGD should be understood as an endangering disorder with a complex psychosocial background.\n\n\nWHAT THIS PAPER ADDS\nIn representative samples of children and adolescents, on average, 2% are affected by Internet gaming disorder (IGD). The mean prevalences (overall, clinical samples included) reach 5.5%. Definitions are heterogeneous and the relationship with substance-related addictions is inconsistent. Many aetiological factors are related to the development and maintenance of IGD. This review presents an integrated model of IGD, delineating the interplay of these factors.",
"title": ""
}
] |
scidocsrr
|
aa0535d08ed619450e3cdee0e847b806
|
Approximate Note Transcription for the Improved Identification of Difficult Chords
|
[
{
"docid": "e8933b0afcd695e492d5ddd9f87aeb81",
"text": "This article proposes a method for the automatic transcription of the melody, bass line, and chords in polyphonic pop music. The method uses a frame-wise pitch-salience estimator as a feature extraction front-end. For the melody and bass-line transcription, this is followed by acoustic modeling of note events and musicological modeling of note transitions. The acoustic models include a model for the target notes (i.e., melody or bass notes) and a background model. The musicological model involves key estimation and note bigrams that determine probabilities for transitions between target notes. A transcription of the melody or the bass line is obtained using Viterbi search via the target and the background note models. The performance of the melody and the bass-line transcription is evaluated using approximately 8.5 hours of realistic polyphonic music. The chord transcription maps the pitch salience estimates to a pitch-class representation and uses trained chord models and chord-transition probabilities to produce a transcription consisting of major and minor triads. For chords, the evaluation material consists of the first eight Beatles albums. The method is computationally efficient and allows causal implementation, so it can process streaming audio. Transcription of music refers to the analysis of an acoustic music signal for producing a parametric representation of the signal. The representation may be a music score with a meticulous arrangement for each instrument or an approximate description of melody and chords in the piece, for example. The latter type of transcription is commonly used in commercial songbooks of pop music and is usually sufficient for musicians or music hobbyists to play the piece. On the other hand, more detailed transcriptions are often employed in classical music to preserve the exact arrangement of the composer.",
"title": ""
}
] |
[
{
"docid": "1f1c4c69a4c366614f0cc9ecc24365ba",
"text": "BACKGROUND\nBurnout is a major issue among medical students. Its general characteristics are loss of interest in study and lack of motivation. A study of the phenomenon must extend beyond the university environment and personality factors to consider whether career choice has a role in the occurrence of burnout.\n\n\nMETHODS\nQuantitative, national survey (n = 733) among medical students, using a 12-item career motivation list compiled from published research results and a pilot study. We measured burnout by the validated Hungarian version of MBI-SS.\n\n\nRESULTS\nThe most significant career choice factor was altruistic motivation, followed by extrinsic motivations: gaining a degree, finding a job, accessing career opportunities. Lack of altruism was found to be a major risk factor, in addition to the traditional risk factors, for cynicism and reduced academic efficacy. Our study confirmed the influence of gender differences on both career choice motivations and burnout.\n\n\nCONCLUSION\nThe structure of career motivation is a major issue in the transformation of the medical profession. Since altruism is a prominent motivation for many women studying medicine, their entry into the profession in increasing numbers may reinforce its traditional character and act against the present trend of deprofessionalization.",
"title": ""
},
{
"docid": "b2cd02622ec0fc29b54e567c7f10a935",
"text": "Performance and high availability have become increasingly important drivers, amongst other drivers, for user retention in the context of web services such as social networks, and web search. Exogenic and/or endogenic factors often give rise to anomalies, making it very challenging to maintain high availability, while also delivering high performance. Given that service-oriented architectures (SOA) typically have a large number of services, with each service having a large set of metrics, automatic detection of anomalies is nontrivial. Although there exists a large body of prior research in anomaly detection, existing techniques are not applicable in the context of social network data, owing to the inherent seasonal and trend components in the time series data. To this end, we developed two novel statistical techniques for automatically detecting anomalies in cloud infrastructure data. Specifically, the techniques employ statistical learning to detect anomalies in both application, and system metrics. Seasonal decomposition is employed to filter the trend and seasonal components of the time series, followed by the use of robust statistical metrics – median and median absolute deviation (MAD) – to accurately detect anomalies, even in the presence of seasonal spikes. We demonstrate the efficacy of the proposed techniques from three different perspectives, viz., capacity planning, user behavior, and supervised learning. In particular, we used production data for evaluation, and we report Precision, Recall, and F-measure in each case.",
"title": ""
},
{
"docid": "69d7ec6fe0f847cebe3d1d0ae721c950",
"text": "Circularly polarized (CP) dielectric resonator antenna (DRA) subarrays have been numerically studied and experimentally verified. Elliptical CP DRA is used as the antenna element, which is excited by either a narrow slot or a probe. The elements are arranged in a 2 by 2 subarray configuration and are excited sequentially. In order to optimize the CP bandwidth, wideband feeding networks have been designed. Three different types of feeding network are studied; they are parallel feeding network, series feeding network and hybrid ring feeding network. For the CP DRA subarray with hybrid ring feeding network, the impedance matching bandwidth (S11<-10 dB) and 3-dB AR bandwidth achieved are 44% and 26% respectively",
"title": ""
},
{
"docid": "7d603d154025f7160c0711bba92e1049",
"text": "Since 2013, a stream of disclosures has prompted reconsideration of surveillance law and policy. One of the most controversial principles, both in the United States and abroad, is that communications metadata receives substantially less protection than communications content. Several nations currently collect telephone metadata in bulk, including on their own citizens. In this paper, we attempt to shed light on the privacy properties of telephone metadata. Using a crowdsourcing methodology, we demonstrate that telephone metadata is densely interconnected, can trivially be reidentified, and can be used to draw sensitive inferences.",
"title": ""
},
{
"docid": "d131f4f22826a2083d35dfa96bf2206b",
"text": "The ranking of n objects based on pairwise comparisons is a core machine learning problem, arising in recommender systems, ad placement, player ranking, biological applications and others. In many practical situations the true pairwise comparisons cannot be actively measured, but a subset of all n(n−1)/2 comparisons is passively and noisily observed. Optimization algorithms (e.g., the SVM) could be used to predict a ranking with fixed expected Kendall tau distance, while achieving an Ω(n) lower bound on the corresponding sample complexity. However, due to their centralized structure they are difficult to extend to online or distributed settings. In this paper we show that much simpler algorithms can match the same Ω(n) lower bound in expectation. Furthermore, if an average of O(n log(n)) binary comparisons are measured, then one algorithm recovers the true ranking in a uniform sense, while the other predicts the ranking more accurately near the top than the bottom. We discuss extensions to online and distributed ranking, with benefits over traditional alternatives.",
"title": ""
},
{
"docid": "d59e21319b9915c2f6d7a8931af5503c",
"text": "The effect of directional antenna elements in uniform circular arrays (UCAs) for direction of arrival (DOA) estimation is studied in this paper. While the vast majority of previous work assumes isotropic antenna elements or omnidirectional dipoles, this work demonstrates that improved DOA estimation accuracy and increased bandwidth is achievable with appropriately-designed directional antennas. The Cramer-Rao Lower Bound (CRLB) is derived for UCAs with directional antennas and is compared to isotropic antennas for 4- and 8-element arrays using a theoretical radiation pattern. The directivity that minimizes the CRLB is identified and microstrip patch antennas approximating the optimal theoretical gain pattern are designed to compare the resulting DOA estimation accuracy with a UCA using dipole antenna elements. Simulation results show improved DOA estimation accuracy and robustness using microstrip patch antennas as opposed to conventional dipoles. Additionally, it is shown that the bandwidth of a UCA for DOA estimation is limited only by the broadband characteristics of the directional antenna elements and not by the electrical size of the array as is the case with omnidirectional antennas.",
"title": ""
},
{
"docid": "959a43b6b851a4a255466296efac7299",
"text": "Technology in football has been debated by pundits, players and fans all over the world for the past decade. FIFA has recently commissioned the use of ‘Hawk-Eye’ and ‘Goal Ref’ goal line technology systems at the 2014 World Cup in Brazil. This paper gives an in depth evaluation of the possible technologies that could be used in football and determines the potential benefits and implications these systems could have on the officiating of football matches. The use of technology in other sports is analyzed to come to a conclusion as to whether officiating technology should be used in football. Will football be damaged by the loss of controversial incidents such as Frank Lampard’s goal against Germany at the 2010 World Cup? Will cost, accuracy and speed continue to prevent the use of officiating technology in football? Time will tell, but for now, any advancement in the use of technology in football will be met by some with discontent, whilst others see it as moving the sport into the 21 century.",
"title": ""
},
{
"docid": "bc3924d12ee9d07a752fce80a67bb438",
"text": "Unsupervised semantic segmentation in the time series domain is a much-studied problem due to its potential to detect unexpected regularities and regimes in poorly understood data. However, the current techniques have several shortcomings, which have limited the adoption of time series semantic segmentation beyond academic settings for three primary reasons. First, most methods require setting/learning many parameters and thus may have problems generalizing to novel situations. Second, most methods implicitly assume that all the data is segmentable, and have difficulty when that assumption is unwarranted. Finally, most research efforts have been confined to the batch case, but online segmentation is clearly more useful and actionable. To address these issues, we present an algorithm which is domain agnostic, has only one easily determined parameter, and can handle data streaming at a high rate. In this context, we test our algorithm on the largest and most diverse collection of time series datasets ever considered, and demonstrate our algorithm's superiority over current solutions. Furthermore, we are the first to show that semantic segmentation may be possible at superhuman performance levels.",
"title": ""
},
{
"docid": "5b89c42eb7681aff070448bc22e501ea",
"text": "DISCOVER operates on relational databases and facilitates information discovery on them by allowing its user to issue keyword queries without any knowledge of the database schema or of SQL. DISCOVER returns qualified joining networks of tuples, that is, sets of tuples that are associated because they join on their primary and foreign keys and collectively contain all the keywords of the query. DISCOVER proceeds in two steps. First the Candidate Network Generator generates all candidate networks of relations, that is, join expressions that generate the joining networks of tuples. Then the Plan Generator builds plans for the efficient evaluation of the set of candidate networks, exploiting the opportunities to reuse common subexpressions of the candidate networks. We prove that DISCOVER finds without redundancy all relevant candidate networks, whose size can be data bound, by exploiting the structure of the schema. We prove that the selection of the optimal execution plan (way to reuse common subexpressions) is NP-complete. We provide a greedy algorithm and we show that it provides near-optimal plan execution time cost. Our experimentation also provides hints on tuning the greedy algorithm.",
"title": ""
},
{
"docid": "f0dbe9ad4934ff4d5857cfc5a876bcb6",
"text": "Although pricing fraud is an important issue for improving service quality of online shopping malls, research on automatic fraud detection has been limited. In this paper, we propose an unsupervised learning method based on a finite mixture model to identify pricing frauds. We consider two states, normal and fraud, for each item according to whether an item description is relevant to its price by utilizing the known number of item clusters. Two states of an observed item are modeled as hidden variables, and the proposed models estimate the state by using an expectation maximization (EM) algorithm. Subsequently, we suggest a special case of the proposed model, which is applicable when the number of item clusters is unknown. The experiment results show that the proposed models are more effective in identifying pricing frauds than the existing outlier detection methods. Furthermore, it is presented that utilizing the number of clusters is helpful in facilitating the improvement of pricing fraud detection",
"title": ""
},
{
"docid": "49e91d22adb0cdeb014b8330e31f226d",
"text": "Ghrelin increases non-REM sleep and decreases REM sleep in young men but does not affect sleep in young women. In both sexes, ghrelin stimulates the activity of the somatotropic and the hypothalamic-pituitary-adrenal (HPA) axis, as indicated by increased growth hormone (GH) and cortisol plasma levels. These two endocrine axes are crucially involved in sleep regulation. As various endocrine effects are age-dependent, aim was to study ghrelin's effect on sleep and secretion of GH and cortisol in elderly humans. Sleep-EEGs (2300-0700 h) and secretion profiles of GH and cortisol (2000-0700 h) were determined in 10 elderly men (64.0+/-2.2 years) and 10 elderly, postmenopausal women (63.0+/-2.9 years) twice, receiving 50 microg ghrelin or placebo at 2200, 2300, 0000, and 0100 h, in this single-blind, randomized, cross-over study. In men, ghrelin compared to placebo was associated with significantly more stage 2 sleep (placebo: 183.3+/-6.1; ghrelin: 221.0+/-12.2 min), slow wave sleep (placebo: 33.4+/-5.1; ghrelin: 44.3+/-7.7 min) and non-REM sleep (placebo: 272.6+/-12.8; ghrelin: 318.2+/-11.0 min). Stage 1 sleep (placebo: 56.9+/-8.7; ghrelin: 50.9+/-7.6 min) and REM sleep (placebo: 71.9+/-9.1; ghrelin: 52.5+/-5.9 min) were significantly reduced. Furthermore, delta power in men was significantly higher and alpha power and beta power were significantly lower after ghrelin than after placebo injection during the first half of night. In women, no effects on sleep were observed. In both sexes, ghrelin caused comparable increases and secretion patterns of GH and cortisol. In conclusion, ghrelin affects sleep in elderly men but not women resembling findings in young subjects.",
"title": ""
},
{
"docid": "9f5ab2f666eb801d4839fcf8f0293ceb",
"text": "In recent years, Wireless Sensor Networks (WSNs) have emerged as a new powerful technology used in many applications such as military operations, surveillance system, Intelligent Transport Systems (ITS) etc. These networks consist of many Sensor Nodes (SNs), which are not only used for monitoring but also capturing the required data from the environment. Most of the research proposals on WSNs have been developed keeping in view of minimization of energy during the process of extracting the essential data from the environment where SNs are deployed. The primary reason for this is the fact that the SNs are operated on battery which discharges quickly after each operation. It has been found in literature that clustering is the most common technique used for energy aware routing in WSNs. The most popular protocol for clustering in WSNs is Low Energy Adaptive Clustering Hierarchy (LEACH) which is based on adaptive clustering technique. This paper provides the taxonomy of various clustering and routing techniques in WSNs based upon metrics such as power management, energy management, network lifetime, optimal cluster head selection, multihop data transmission etc. A comprehensive discussion is provided in the text highlighting the relative advantages and disadvantages of many of the prominent proposals in this category which helps the designers to select a particular proposal based upon its merits over the others. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a7e6a2145b9ae7ca2801a3df01f42f5e",
"text": "The aim of this systematic review was to compare the clinical performance and failure modes of teeth restored with intra-radicular retainers. A search was performed on PubMed/Medline, Central and ClinicalTrials databases for randomized clinical trials comparing clinical behavior and failures of at least two types of retainers. From 341 detected papers, 16 were selected for full-text analysis, of which 9 met the eligibility criteria. A manual search added 2 more studies, totalizing 11 studies that were included in this review. Evaluated retainers were fiber (prefabricated and customized) and metal (prefabricated and cast) posts, and follow-up ranged from 6 months to 10 years. Most studies showed good clinical behavior for evaluated intra-radicular retainers. Reported survival rates varied from 71 to 100% for fiber posts and 50 to 97.1% for metal posts. Studies found no difference in the survival among different metal posts and most studies found no difference between fiber and metal posts. Two studies also showed that remaining dentine height, number of walls and ferrule increased the longevity of the restored teeth. Failures of fiber posts were mainly due to post loss of retention, while metal post failures were mostly related to root fracture, post fracture and crown and/or post loss of retention. In conclusion, metal and fiber posts present similar clinical behavior at short to medium term follow-up. Remaining dental structure and ferrule increase the survival of restored pulpless teeth. Studies with longer follow-up are needed.",
"title": ""
},
{
"docid": "c3539090bef61fcfe4a194058a61d381",
"text": "Real-time environment monitoring and analysis is an important research area of Internet of Things (IoT). Understanding the behavior of the complex ecosystem requires analysis of detailed observations of an environment over a range of different conditions. One such example in urban areas includes the study of tree canopy cover over the microclimate environment using heterogeneous sensor data. There are several challenges that need to be addressed, such as obtaining reliable and detailed observations over monitoring area, detecting unusual events from data, and visualizing events in real-time in a way that is easily understandable by the end users (e.g., city councils). In this regard, we propose an integrated geovisualization framework, built for real-time wireless sensor network data on the synergy of computational intelligence and visual methods, to analyze complex patterns of urban microclimate. A Bayesian maximum entropy-based method and a hyperellipsoidal model-based algorithm have been build in our integrated framework to address above challenges. The proposed integrated framework was verified using the dataset from an indoor and two outdoor network of IoT devices deployed at two strategically selected locations in Melbourne, Australia. The data from these deployments are used for evaluation and demonstration of these components’ functionality along with the designed interactive visualization components.",
"title": ""
},
{
"docid": "e2134985f8067efe41935adff8ef2150",
"text": "In this paper, a high efficiency and high power factor single-stage balanced forward-flyback converter merging a foward and flyback converter topologies is proposed. The conventional AC/DC flyback converter can achieve a good power factor but it has a high offset current through the transformer magnetizing inductor, which results in a large core loss and low power conversion efficiency. And, the conventional forward converter can achieve the good power conversion efficiency with the aid of the low core loss but the input current dead zone near zero cross AC input voltage deteriorates the power factor. On the other hand, since the proposed converter can operate as the forward and flyback converters during switch on and off periods, respectively, it cannot only perform the power transfer during an entire switching period but also achieve the high power factor due to the flyback operation. Moreover, since the current balanced capacitor can minimize the offset current through the transformer magnetizing inductor regardless of the AC input voltage, the core loss and volume of the transformer can be minimized. Therefore, the proposed converter features a high efficiency and high power factor. To confirm the validity of the proposed converter, theoretical analysis and experimental results from a prototype of 24W LED driver are presented.",
"title": ""
},
{
"docid": "956be237e0b6e7bafbf774d56a8841d2",
"text": "Wireless sensor networks (WSNs) will play an active role in the 21th Century Healthcare IT to reduce the healthcare cost and improve the quality of care. The protection of data confidentiality and patient privacy are the most critical requirements for the ubiquitous use of WSNs in healthcare environments. This requires a secure and lightweight user authentication and access control. Symmetric key based access control is not suitable for WSNs in healthcare due to dynamic network topology, mobility, and stringent resource constraints. In this paper, we propose a secure, lightweight public key based security scheme, Mutual Authentication and Access Control based on Elliptic curve cryptography (MAACE). MAACE is a mutual authentication protocol where a healthcare professional can authenticate to an accessed node (a PDA or medical sensor) and vice versa. This is to ensure that medical data is not exposed to an unauthorized person. On the other hand, it ensures that medical data sent to healthcare professionals did not originate from a malicious node. MAACE is more scalable and requires less memory compared to symmetric key-based schemes. Furthermore, it is much more lightweight than other public key-based schemes. Security analysis and performance evaluation results are presented and compared to existing schemes to show advantages of the proposed scheme.",
"title": ""
},
{
"docid": "8b641a8f504b550e1eed0dca54bfbe04",
"text": "Overlay architectures are programmable logic systems that are compiled on top of a traditional FPGA. These architectures give designers flexibility, and have a number of benefits, such as being designed or optimized for specific application domains, making it easier or more efficient to implement solutions, being independent of platform, allowing the ability to do partial reconfiguration regardless of the underlying architecture, and allowing compilation without using vendor tools, in some cases with fully open source tool chains. This thesis describes the implementation of two FPGA overlay architectures, ZUMA and CARBON. These overlay implementations include optimizations to reduce area and increase speed which may be applicable to many other FPGAs and also ASIC systems. ZUMA is a fine-grain overlay which resembles a modern commercial FPGA, and is compatible with the VTR open source compilation tools. The implementation includes a number of novel features tailored to efficient FPGA implementation, including the utilization of reprogrammable LUTRAMs, a novel two-stage local routing crossbar, and an area efficient configuration controller. CARBON",
"title": ""
},
{
"docid": "2efb10a430e001acd201a0b16ab74836",
"text": "As the cost of human full genome sequencing continues to fall, we will soon witness a prodigious amount of human genomic data in the public cloud. To protect the confidentiality of the genetic information of individuals, the data has to be encrypted at rest. On the other hand, encryption severely hinders the use of this valuable information, such as Genome-wide Range Query (GRQ), in medical/genomic research. While the problem of secure range query on outsourced encrypted data has been extensively studied, the current schemes are far from practical deployment in terms of efficiency and scalability due to the data volume in human genome sequencing. In this paper, we investigate the problem of secure GRQ over human raw aligned genomic data in a third-party outsourcing model. Our solution contains a novel secure range query scheme based on multi-keyword symmetric searchable encryption (MSSE). The proposed scheme incurs minimal ciphertext expansion and computation overhead. We also present a hierarchical GRQ-oriented secure index structure tailored for efficient and large-scale genomic data lookup in the cloud while preserving the query privacy. Our experiment on real human genomic data shows that a secure GRQ request with range size 100,000 over more than 300 million encrypted short reads takes less than 3 minutes, which is orders of magnitude faster than existing solutions.",
"title": ""
},
{
"docid": "b7617b5dd2a6f392f282f6a34f5b6751",
"text": "In the semiconductor market, the trend of packaging for die stacking technology moves to high density with thinner chips and higher capacity of memory devices. Moreover, the wafer sawing process is becoming more important for thin wafer, because its process speed tends to affect sawn quality and yield. ULK (Ultra low-k) device could require laser grooving application to reduce the stress during wafer sawing. Furthermore under 75um-thick thin low-k wafer is not easy to use the laser grooving application. So, UV laser dicing technology that is very useful tool for Si wafer was selected as full cut application, which has been being used on low-k wafer as laser grooving method.",
"title": ""
},
{
"docid": "c16ff028e77459867eed4c2b9c1f44c6",
"text": "Neuroimage analysis usually involves learning thousands or even millions of variables using only a limited number of samples. In this regard, sparse models, e.g. the lasso, are applied to select the optimal features and achieve high diagnosis accuracy. The lasso, however, usually results in independent unstable features. Stability, a manifest of reproducibility of statistical results subject to reasonable perturbations to data and the model (Yu 2013), is an important focus in statistics, especially in the analysis of high dimensional data. In this paper, we explore a nonnegative generalized fused lasso model for stable feature selection in the diagnosis of Alzheimer’s disease. In addition to sparsity, our model incorporates two important pathological priors: the spatial cohesion of lesion voxels and the positive correlation between the features and the disease labels. To optimize the model, we propose an efficient algorithm by proving a novel link between total variation and fast network flow algorithms via conic duality. Experiments show that the proposed nonnegative model performs much better in exploring the intrinsic structure of data via selecting stable features compared with other state-of-the-arts. Introduction Neuroimage analysis is challenging due to its high feature dimensionality and data scarcity. Sparse models such as the lasso (Tibshirani 1996) have gained great reputation in statistics and machine learning, and they have been applied to the analysis of such high dimensional data by exploiting the sparsity property in the absence of abundant data. As a major result, automatic selection of relevant variables/features by such sparse formulation achieves promising performance. For example, in (Liu, Zhang, and Shen 2012), the lasso model was applied to the diagnosis of Alzheimer’s disease (AD) and showed better performance than the support vector machine (SVM), which is one of the state-of-the-arts in brain image classification. However, in statistics, it is known that the lasso does not always provide interpretable results because of its instability (Yu 2013). “Stability” here means the reproducibility of statistical results subject to reasonable perturbations to data and Copyright c © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. the model. (These perturbations include the often used Jacknife, bootstrap and cross-validation.) This unstable behavior of the lasso model is critical in high dimensional data analysis. The resulting irreproducibility of the feature selection are especially undesirable in neuroimage analysis/diagnosis. However, unlike the problems such as registration and classification, the stability issue of feature selection is much less studied in this field. In this paper we propose a model to induce more stable feature selection from high dimensional brain structural Magnetic Resonance Imaging (sMRI) images. Besides sparsity, the proposed model harnesses two important additional pathological priors in brain sMRI: (i) the spatial cohesion of lesion voxels (via inducing fusion terms) and (ii) the positive correlation between the features and the disease labels. The correlation prior is based on the observation that in many brain image analysis problems (such as AD, frontotemporal dementia, corticobasal degeneration, etc), there exist strong correlations between the features and the labels. For example, gray matter of AD is degenerated/atrophied. 
Therefore, the gray matter values (indicating the volume) are positively correlated with the cognitive scores or disease labels {-1,1}. That is, the less gray matter, the lower the cognitive score. Accordingly, we propose nonnegative constraints on the variables to enforce the prior and name the model as “non-negative Generalized Fused Lasso” (nGFL). It extends the popular generalized fused lasso and enables it to explore the intrinsic structure of data via selecting stable features. To measure feature stability, we introduce the “Estimation Stability” recently proposed in (Yu 2013) and the (multi-set) Dice coefficient (Dice 1945). Experiments demonstrate that compared with existing models, our model selects much more stable (and pathological-prior consistent) voxels. It is worth mentioning that the non-negativeness per se is a very important prior of many practical problems, e.g. (Lee and Seung 1999). Although nGFL is proposed to solve the diagnosis of AD in this work, the model can be applied to more general problems. Incorporating these priors makes the problem novel w.r.t the lasso or generalized fused lasso from an optimization standpoint. Although off-the-shelf convex solvers such as CVX (Grant and Boyd 2013) can be applied to solve the optimization, it hardly scales to high-dimensional problems in feasible time. In this regard, we propose an efficient algoProceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence",
"title": ""
}
] |
scidocsrr
|
2a1200ca1f84e5678ea71c7497f575a3
|
Validating the independent components of neuroimaging time series via clustering and visualization
|
[
{
"docid": "4daa16553442aa424a1578f02f044c6e",
"text": "Cluster structure of gene expression data obtained from DNA microarrays is analyzed and visualized with the Self-Organizing Map (SOM) algorithm. The SOM forms a non-linear mapping of the data to a two-dimensional map grid that can be used as an exploratory data analysis tool for generating hypotheses on the relationships, and ultimately of the function of the genes. Similarity relationships within the data and cluster structures can be visualized and interpreted. The methods are demonstrated by computing a SOM of yeast genes. The relationships of known functional classes of genes are investigated by analyzing their distribution on the SOM, the cluster structure is visualized by the U-matrix method, and the clusters are characterized in terms of the properties of the expression profiles of the genes. Finally, it is shown that the SOM visualizes the similarity of genes in a more trustworthy way than two alternative methods, multidimensional scaling and hierarchical clustering.",
"title": ""
}
] |
[
{
"docid": "e4405c71336ea13ccbd43aa84651dc60",
"text": "Nurses are often asked to think about leadership, particularly in times of rapid change in healthcare, and where questions have been raised about whether leaders and managers have adequate insight into the requirements of care. This article discusses several leadership styles relevant to contemporary healthcare and nursing practice. Nurses who are aware of leadership styles may find this knowledge useful in maintaining a cohesive working environment. Leadership knowledge and skills can be improved through training, where, rather than having to undertake formal leadership roles without adequate preparation, nurses are able to learn, nurture, model and develop effective leadership behaviours, ultimately improving nursing staff retention and enhancing the delivery of safe and effective care.",
"title": ""
},
{
"docid": "ba391ddf37a4757bc9b8d9f4465a66dc",
"text": "Adverse childhood experiences (ACEs) have been linked with risky health behaviors and the development of chronic diseases in adulthood. This study examined associations between ACEs, chronic diseases, and risky behaviors in adults living in Riyadh, Saudi Arabia in 2012 using the ACE International Questionnaire (ACE-IQ). A cross-sectional design was used, and adults who were at least 18 years of age were eligible to participate. ACEs event scores were measured for neglect, household dysfunction, abuse (physical, sexual, and emotional), and peer and community violence. The ACE-IQ was supplemented with questions on risky health behaviors, chronic diseases, and mood. A total of 931 subjects completed the questionnaire (a completion rate of 88%); 57% of the sample was female, 90% was younger than 45 years, 86% had at least a college education, 80% were Saudi nationals, and 58% were married. One-third of the participants (32%) had been exposed to 4 or more ACEs, and 10%, 17%, and 23% had been exposed to 3, 2, or 1 ACEs respectively. Only 18% did not have an ACE. The prevalence of risky health behaviors ranged between 4% and 22%. The prevalence of self-reported chronic diseases ranged between 6% and 17%. Being exposed to 4 or more ACEs increased the risk of having chronic diseases by 2-11 fold, and increased risky health behaviors by 8-21 fold. The findings of this study will contribute to the planning and development of programs to prevent child maltreatment and to alleviate the burden of chronic diseases in adults.",
"title": ""
},
{
"docid": "4f3d2b869322125a8fad8a39726c99f8",
"text": "Routing Protocol for Low Power and Lossy Networks (RPL) is the routing protocol for IoT and Wireless Sensor Networks. RPL is a lightweight protocol, having good routing functionality, but has basic security functionality. This may make RPL vulnerable to various attacks. Providing security to IoT networks is challenging, due to their constrained nature and connectivity to the unsecured internet. This survey presents the elaborated review on the security of Routing Protocol for Low Power and Lossy Networks (RPL). This survey is built upon the previous work on RPL security and adapts to the security issues and constraints specific to Internet of Things. An approach to classifying RPL attacks is made based on Confidentiality, Integrity, and Availability. Along with that, we surveyed existing solutions to attacks which are evaluated and given possible solutions (theoretically, from various literature) to the attacks which are not yet evaluated. We further conclude with open research challenges and future work needs to be done in order to secure RPL for Internet of Things (IoT).",
"title": ""
},
{
"docid": "878292ad8dfbe9118c64a14081da561a",
"text": "Public-key cryptography is indispensable for cyber security. However, as a result of Peter Shor shows, the public-key schemes that are being used today will become insecure once quantum computers reach maturity. This paper gives an overview of the alternative public-key schemes that have the capability to resist quantum computer attacks and compares them.",
"title": ""
},
{
"docid": "c8a9f16259e437cda8914a92148901ab",
"text": "A distinguishing property of human intelligence is the ability to flexibly use language in order to communicate complex ideas with other humans in a variety of contexts. Research in natural language dialogue should focus on designing communicative agents which can integrate themselves into these contexts and productively collaborate with humans. In this abstract, we propose a general situated language learning paradigm which is designed to bring about robust language agents able to cooperate productively with humans. This dialogue paradigm is built on a utilitarian definition of language understanding. Language is one of multiple tools which an agent may use to accomplish goals in its environment. We say an agent “understands” language only when it is able to use language productively to accomplish these goals. Under this definition, an agent’s communication success reduces to its success on tasks within its environment. This setup contrasts with many conventional natural language tasks, which maximize linguistic objectives derived from static datasets. Such applications often make the mistake of reifying language as an end in itself. The tasks prioritize an isolated measure of linguistic intelligence (often one of linguistic competence, in the sense of Chomsky (1965)), rather than measuring a model’s effectiveness in real-world scenarios. Our utilitarian definition is motivated by recent successes in reinforcement learning methods. In a reinforcement learning setting, agents maximize success metrics on real-world tasks, without requiring direct supervision of linguistic behavior.",
"title": ""
},
{
"docid": "49b71624485d8b133b8ebb0cc72d2540",
"text": "Web browsers show HTTPS authentication warnings (i.e., SSL warnings) when the integrity and confidentiality of users' interactions with websites are at risk. Our goal in this work is to decrease the number of users who click through the Google Chrome SSL warning. Prior research showed that the Mozilla Firefox SSL warning has a much lower click-through rate (CTR) than Chrome. We investigate several factors that could be responsible: the use of imagery, extra steps before the user can proceed, and style choices. To test these factors, we ran six experimental SSL warnings in Google Chrome 29 and measured 130,754 impressions.",
"title": ""
},
{
"docid": "3bd2941e72695b3214247c8c7071410b",
"text": "The paper contributes to the emerging literature linking sustainability as a concept to problems researched in HRM literature. Sustainability is often equated with social responsibility. However, emphasizing mainly moral or ethical values neglects that sustainability can also be economically rational. This conceptual paper discusses how the notion of sustainability has developed and emerged in HRM literature. A typology of sustainability concepts in HRM is presented to advance theorizing in the field of Sustainable HRM. The concepts of paradox, duality, and dilemma are reviewed to contribute to understanding the emergence of sustainability in HRM. It is argued in this paper that sustainability can be applied as a concept to cope with the tensions of shortvs. long-term HRM and to make sense of paradoxes, dualities, and dilemmas. Furthermore, it is emphasized that the dualities cannot be reconciled when sustainability is interpreted in a way that leads to ignorance of one of the values or logics. Implications for further research and modest suggestions for managerial practice are derived.",
"title": ""
},
{
"docid": "acf6361be5bb883153bebd4c3ec032c2",
"text": "The first objective of the paper is to identifiy a number of issues related to crowdfunding that are worth studying from an industrial organization (IO) perspective. To this end, we first propose a definition of crowdfunding; next, on the basis on a previous empirical study, we isolate what we believe are the main features of crowdfunding; finally, we point to a number of strands of the literature that could be used to study the various features of crowdfunding. The second objective of the paper is to propose some preliminary efforts towards the modelization of crowdfunding. In a first model, we associate crowdfunding with pre-ordering and price discrimination, and we study the conditions under which crowdfunding is preferred to traditional forms of external funding. In a second model, we see crowdfunding as a way to make a product better known by the consumers and we give some theoretical underpinning for the empirical finding that non-profit organizations tend to be more successful in using crowdfunding. JEL classification codes: G32, L11, L13, L15, L21, L31",
"title": ""
},
{
"docid": "7ae8e78059bb710c46b66c120e1ff0ef",
"text": "Biometrics technology is keep growing substantially in the last decades with great advances in biometric applications. An accurate personal authentication or identification has become a critical step in a wide range of applications such as national ID, electronic commerce, and automated and remote banking. The recent developments in the biometrics area have led to smaller, faster, and cheaper systems such as mobile device systems. As a kind of human biometrics for personal identification, fingerprint is the dominant trait due to its simplicity to be captured, processed, and extracted without violating user privacy. In a wide range of applications of fingerprint recognition, including civilian and forensics implementations, a large amount of fingerprints are collected and stored everyday for different purposes. In Automatic Fingerprint Identification System (AFIS) with a large database, the input image is matched with all fields inside the database to identify the most potential identity. Although satisfactory performances have been reported for fingerprint authentication (1:1 matching), both time efficiency and matching accuracy deteriorate seriously by simple extension of a 1:1 authentication procedure to a 1:N identification system (Manhua, 2010). The system response time is the key issue of any AFIS, and it is often improved by controlling the accuracy of the identification to satisfy the system requirement. In addition to developing new technologies, it is necessary to make clear the trade-off between the response time and the accuracy in fingerprint identification systems. Moreover, from the versatility and developing cost points of view, the trade-off should be realized in terms of system design, implementation, and usability. Fingerprint classification is one of the standard approaches to speed up the matching process between the input sample and the collected database (K. Jain et al., 2007). Fingerprint classification is considered as indispensable step toward reducing the search time through large fingerprint databases. It refers to the problem of assigning fingerprint to one of several pre-specified classes, and it presents an interesting problem in pattern recognition, especially in the real and time sensitive applications that require small response time. Fingerprint classification process works on narrowing down the search domain into smaller database subsets, and hence speeds up the total response time of any AFIS. Even for",
"title": ""
},
{
"docid": "20acbae6f76e3591c8b696481baffc90",
"text": "A long-standing challenge in coreference resolution has been the incorporation of entity-level information – features defined over clusters of mentions instead of mention pairs. We present a neural network based coreference system that produces high-dimensional vector representations for pairs of coreference clusters. Using these representations, our system learns when combining clusters is desirable. We train the system with a learning-to-search algorithm that teaches it which local decisions (cluster merges) will lead to a high-scoring final coreference partition. The system substantially outperforms the current state-of-the-art on the English and Chinese portions of the CoNLL 2012 Shared Task dataset despite using few hand-engineered features.",
"title": ""
},
{
"docid": "67e35bc7add5d6482fff4cd4f2060e6b",
"text": "There is a clear trend in the automotive industry to use more electrical systems in order to satisfy the ever-growing vehicular load demands. Thus, it is imperative that automotive electrical power systems will obviously undergo a drastic change in the next 10-20 years. Currently, the situation in the automotive industry is such that the demands for higher fuel economy and more electric power are driving advanced vehicular power system voltages to higher levels. For example, the projected increase in total power demand is estimated to be about three to four times that of the current value. This means that the total future power demand of a typical advanced vehicle could roughly reach a value as high as 10 kW. In order to satisfy this huge vehicular load, the approach is to integrate power electronics intensive solutions within advanced vehicular power systems. In view of this fact, this paper aims at reviewing the present situation as well as projected future research and development work of advanced vehicular electrical power systems including those of electric, hybrid electric, and fuel cell vehicles (EVs, HEVs, and FCVs). The paper will first introduce the proposed power system architectures for HEVs and FCVs and will then go on to exhaustively discuss the specific applications of dc/dc and dc/ac power electronic converters in advanced automotive power systems",
"title": ""
},
{
"docid": "1315247aa0384097f5f9e486bce09bd4",
"text": "We give an overview of the scripting languages used in existing cryptocurrencies, and in particular we review in some detail the scripting languages of Bitcoin, Nxt and Ethereum, in the context of a high-level overview of Distributed Ledger Technology and cryptocurrencies. We survey different approaches, and give an overview of critiques of existing languages. We also cover technologies that might be used to underpin extensions and innovations in scripting and contracts, including technologies for verification, such as zero knowledge proofs, proof-carrying code and static analysis, as well as approaches to making systems more efficient, e.g. Merkelized Abstract Syntax Trees.",
"title": ""
},
{
"docid": "5c6c8834304264b59db5f28021ccbdb8",
"text": "We develop a dynamic optimal control model of a fashion designers challenge of maintaining brand image in the face of short-term pro
t opportunities through expanded sales that risk brand dilution in the longer-run. The key state variable is the brands reputation, and the key decision is sales volume. Depending on the brands capacity to command higher prices, one of two regimes is observed. If the price mark-ups relative to production costs are modest, then the optimal solution may simply be to exploit whatever value can be derived from the brand in the short-run and retire the brand when that capacity is fully diluted. However, if the price markups are more substantial, then an existing brand should be preserved.",
"title": ""
},
{
"docid": "ce8729f088aaf9f656c9206fc67ff4bd",
"text": "Traditional passive radar detectors compute cross correlation of the raw data in the reference and surveillance channels. However, there is no optimality guarantee for this detector in the presence of a noisy reference. Here, we develop a new detector that utilizes a test statistic based on the cross correlation of the principal left singular vectors of the reference and surveillance signal-plus-noise matrices. This detector offers better performance by exploiting the inherent low-rank structure when the transmitted signals are a weighted periodic summation of several identical waveforms (amplitude and phase modulation), as is the case with commercial digital illuminators as well as noncooperative radar. We consider a scintillating target. We provide analytical detection performance guarantees establishing signal-to-noise ratio thresholds above which the proposed detection statistic reliably discriminates, in an asymptotic sense, the signal versus no-signal hypothesis. We validate these results using extensive numerical simulations. We demonstrate the “near constant false alarm rate (CFAR)” behavior of the proposed detector with respect to a fixed, SNR-independent threshold and contrast that with the need to adjust the detection threshold in an SNR-dependent manner to maintain CFAR for other detectors found in the literature. Extensions of the proposed detector for settings applicable to orthogonal frequency division multiplexing (OFDM), adaptive radar are discussed.",
"title": ""
},
{
"docid": "ac7b607cc261654939868a62822a58eb",
"text": "Interdigitated capacitors (IDC) are extensively used for a variety of chemical and biological sensing applications. Printing and functionalizing these IDC sensors on bendable substrates will lead to new innovations in healthcare and medicine, food safety inspection, environmental monitoring, and public security. The synthesis of an electrically conductive aqueous graphene ink stabilized in deionized water using the polymer Carboxymethyl Cellulose (CMC) is introduced in this paper. CMC is a nontoxic hydrophilic cellulose derivative used in food industry. The water-based graphene ink is then used to fabricate IDC sensors on mechanically flexible polyimide substrates. The capacitance and frequency response of the sensors are analyzed, and the effect of mechanical stress on the electrical properties is examined. Experimental results confirm low thin film resistivity (~6;.6×10-3 Ω-cm) and high capacitance (>100 pF). The printed sensors are then used to measure water content of ethanol solutions to demonstrate the proposed conductive ink and fabrication methodology for creating chemical sensors on thin membranes.",
"title": ""
},
{
"docid": "353d6ed75f2a4bca5befb5fdbcea2bcc",
"text": "BACKGROUND\nThe number of mental health apps (MHapps) developed and now available to smartphone users has increased in recent years. MHapps and other technology-based solutions have the potential to play an important part in the future of mental health care; however, there is no single guide for the development of evidence-based MHapps. Many currently available MHapps lack features that would greatly improve their functionality, or include features that are not optimized. Furthermore, MHapp developers rarely conduct or publish trial-based experimental validation of their apps. Indeed, a previous systematic review revealed a complete lack of trial-based evidence for many of the hundreds of MHapps available.\n\n\nOBJECTIVE\nTo guide future MHapp development, a set of clear, practical, evidence-based recommendations is presented for MHapp developers to create better, more rigorous apps.\n\n\nMETHODS\nA literature review was conducted, scrutinizing research across diverse fields, including mental health interventions, preventative health, mobile health, and mobile app design.\n\n\nRESULTS\nSixteen recommendations were formulated. Evidence for each recommendation is discussed, and guidance on how these recommendations might be integrated into the overall design of an MHapp is offered. Each recommendation is rated on the basis of the strength of associated evidence. It is important to design an MHapp using a behavioral plan and interactive framework that encourages the user to engage with the app; thus, it may not be possible to incorporate all 16 recommendations into a single MHapp.\n\n\nCONCLUSIONS\nRandomized controlled trials are required to validate future MHapps and the principles upon which they are designed, and to further investigate the recommendations presented in this review. Effective MHapps are required to help prevent mental health problems and to ease the burden on health systems.",
"title": ""
},
{
"docid": "604b29b9e748acf2776155074fee59d8",
"text": "The growing importance of the service sector in almost every economy in the world has created a significant amount of interest in service operations. In practice, many service sectors have sought and made use of various enhancement programs to improve their operations and performance in an attempt to hold competitive success. As most researchers recognize, service operations link with customers. The customers as participants act in the service operations system driven by the goal of sufficing his/her added values. This is one of the distinctive features of service production and consumption. In the paper, first, we propose the idea of service operations improvement by mapping objectively the service experience of customers from the view of customer journey. Second, a portraying scheme of service experience of customers based on the IDEF3 technique is proposed, and last, some implications on service operations improvement are",
"title": ""
},
{
"docid": "759b5a86bc70147842a106cf20b3a0cd",
"text": "This article reviews recent advances in convex optimization algorithms for big data, which aim to reduce the computational, storage, and communications bottlenecks. We provide an overview of this emerging field, describe contemporary approximation techniques such as first-order methods and randomization for scalability, and survey the important role of parallel and distributed computation. The new big data algorithms are based on surprisingly simple principles and attain staggering accelerations even on classical problems.",
"title": ""
},
{
"docid": "107c839a73c12606d4106af7dc04cd96",
"text": "This study presents a novel four-fingered robotic hand to attain a soft contact and high stability under disturbances while holding an object. Each finger is constructed using a tendon-driven skeleton, granular materials corresponding to finger pulp, and a deformable rubber skin. This structure provides soft contact with an object, as well as high adaptation to its shape. Even if the object is deformable and fragile, a grasping posture can be formed without deforming the object. If the air around the granular materials in the rubber skin and jamming transition is vacuumed, the grasping posture can be fixed and the object can be grasped firmly and stably. A high grasping stability under disturbances can be attained. Additionally, the fingertips can work as a small jamming gripper to grasp an object smaller than a fingertip. An experimental investigation indicated that the proposed structure provides a high grasping force with a jamming transition with high adaptability to the object's shape.",
"title": ""
},
{
"docid": "51bcd4099caaebc0e789d4c546e0782b",
"text": "When speaking and reasoning about time, people around the world tend to do so with vocabulary and concepts borrowed from the domain of space. This raises the question of whether the cross-linguistic variability found for spatial representations, and the principles on which these are based, may also carry over to the domain of time. Real progress in addressing this question presupposes a taxonomy for the possible conceptualizations in one domain and its consistent and comprehensive mapping onto the other-a challenge that has been taken up only recently and is far from reaching consensus. This article aims at systematizing the theoretical and empirical advances in this field, with a focus on accounts that deal with frames of reference (FoRs). It reviews eight such accounts by identifying their conceptual ingredients and principles for space-time mapping, and it explores the potential for their integration. To evaluate their feasibility, data from some thirty empirical studies, conducted with speakers of sixteen different languages, are then scrutinized. This includes a critical assessment of the methods employed, a summary of the findings for each language group, and a (re-)analysis of the data in view of the theoretical questions. The discussion relates these findings to research on the mental time line, and explores the psychological reality of temporal FoRs, the degree of cross-domain consistency in FoR adoption, the role of deixis, and the sources and extent of space-time mapping more generally.",
"title": ""
}
] |
scidocsrr
|
be7390a0d3790cb17b54f1b8b45dae52
|
Terminology Extraction: An Analysis of Linguistic and Statistical Approaches
|
[
{
"docid": "8a043a1ac74da0ec0cd55d1c8b658666",
"text": "Most recent research in trainable part of speech taggers has explored stochastic tagging. While these taggers obtain high accuracy, linguistic information is captured indirectly, typically in tens of thousands of lexical and contextual probabilities. In (Brill 1992), a trainable rule-based tagger was described that obtained performance comparable to that of stochastic taggers, but captured relevant linguistic information in a small number of simple non-stochastic rules. In this paper, we describe a number of extensions to this rule-based tagger. First, we describe a method for expressing lexical relations in tagging that stochastic taggers are currently unable to express. Next, we show a rule-based approach to tagging unknown words. Finally, we show how the tagger can be extended into a k-best tagger, where multiple tags can be assigned to words in some cases of uncertainty.",
"title": ""
}
] |
[
{
"docid": "b537af893b84a4c41edb829d45190659",
"text": "We seek a complete description for the neurome of the Drosophila, which involves tracing more than 20,000 neurons. The currently available tracings are sensitive to background clutter and poor contrast of the images. In this paper, we present Tree2Tree2, an automatic neuron tracing algorithm to segment neurons from 3D confocal microscopy images. Building on our previous work in segmentation [1], this method uses an adaptive initial segmentation to detect the neuronal portions, as opposed to a global strategy that often results in under segmentation. In order to connect the disjoint portions, we use a technique called Path Search, which is based on a shortest path approach. An intelligent pruning step is also implemented to delete undesired branches. Tested on 3D confocal microscopy images of GFP labeled Drosophila neurons, the visual and quantitative results suggest that Tree2Tree2 is successful in automatically segmenting neurons in images plagued by background clutter and filament discontinuities.",
"title": ""
},
{
"docid": "de71bef095a0ef7fb4fb1b10d4136615",
"text": "Active learning—a class of algorithms that iteratively searches for the most informative samples to include in a training dataset—has been shown to be effective at annotating data for image classification. However, the use of active learning for object detection is still largely unexplored as determining informativeness of an object-location hypothesis is more difficult. In this paper, we address this issue and present two metrics for measuring the informativeness of an object hypothesis, which allow us to leverage active learning to reduce the amount of annotated data needed to achieve a target object detection performance. Our first metric measures “localization tightness” of an object hypothesis, which is based on the overlapping ratio between the region proposal and the final prediction. Our second metric measures “localization stability” of an object hypothesis, which is based on the variation of predicted object locations when input images are corrupted by noise. Our experimental results show that by augmenting a conventional active-learning algorithm designed for classification with the proposed metrics, the amount of labeled training data required can be reduced up to 25%. Moreover, on PASCAL 2007 and 2012 datasets our localization-stability method has an average relative improvement of 96.5% and 81.9% over the base-line method using classification only. Asian Conference on Computer Vision This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c © Mitsubishi Electric Research Laboratories, Inc., 2018 201 Broadway, Cambridge, Massachusetts 02139 Localization-Aware Active Learning for Object",
"title": ""
},
{
"docid": "1a8e9b74d4c1a32299ca08e69078c70c",
"text": "Semantic Textual Similarity (STS) measures the degree of semantic equivalence between two segments of text, even though the similar context is expressed using different words. The textual segments are word phrases, sentences, paragraphs or documents. The similarity can be measured using lexical, syntactic and semantic information embedded in the sentences. The STS task in SemEval workshop is viewed as a regression problem, where real-valued output is clipped to the range 0-5 on a sentence pair. In this paper, empirical evaluations are carried using lexical, syntactic and semantic features on STS 2016 dataset. A new syntactic feature, Phrase Entity Alignment (PEA) is proposed. A phrase entity is a conceptual unit in a sentence with a subject or an object and its describing words. PEA aligns phrase entities present in the sentences based on their similarity scores. STS score is measured by combing the similarity scores of all aligned phrase entities. The impact of PEA on semantic textual equivalence is depicted using Pearson correlation between system generated scores and the human annotations. The proposed system attains a mean score of 0.7454 using random forest regression model. The results indicate that the system using the lexical, syntactic and semantic features together with PEA feature perform comparably better than existing systems.",
"title": ""
},
{
"docid": "9c97262605b3505bbc33c64ff64cfcd5",
"text": "This essay focuses on possible nonhuman applications of CRISPR/Cas9 that are likely to be widely overlooked because they are unexpected and, in some cases, perhaps even \"frivolous.\" We look at five uses for \"CRISPR Critters\": wild de-extinction, domestic de-extinction, personal whim, art, and novel forms of disease prevention. We then discuss the current regulatory framework and its possible limitations in those contexts. We end with questions about some deeper issues raised by the increased human control over life on earth offered by genome editing.",
"title": ""
},
{
"docid": "df8f22d84cc8f6f38de90c2798889051",
"text": "The large amount of videos popping up every day, make it is more and more critical that key information within videos can be extracted and understood in a very short time. Video summarization, the task of finding the smallest subset of frames, which still conveys the whole story of a given video, is thus of great significance to improve efficiency of video understanding. In this paper, we propose a novel Dilated Temporal Relational Generative Adversarial Network (DTR-GAN) to achieve framelevel video summarization. Given a video, it can select a set of key frames, which contains the most meaningful and compact information. Specifically, DTR-GAN learns a dilated temporal relational generator and a discriminator with three-player loss in an adversarial manner. A new dilated temporal relation (DTR) unit is introduced for enhancing temporal representation capturing. The generator aims to select key frames by using DTR units to effectively exploit global multi-scale temporal context and to complement the commonly used Bi-LSTM. To ensure that the summaries capture enough key video representation from a global perspective rather than a trivial randomly shorten sequence, we present a discriminator that learns to enforce both the information completeness and compactness of summaries via a three-player loss. The three-player loss includes the generated summary loss, the random summary loss, and the real summary (ground-truth) loss, which play important roles for better regularizing the learned model to obtain useful summaries. Comprehensive experiments on two public datasets SumMe and TVSum show the superiority of our DTR-GAN over the stateof-the-art approaches.",
"title": ""
},
{
"docid": "c10bd86125db702e0839e2a3776e195b",
"text": "To solve the big topic modeling problem, we need to reduce both time and space complexities of batch latent Dirichlet allocation (LDA) algorithms. Although parallel LDA algorithms on the multi-processor architecture have low time and space complexities, their communication costs among processors often scale linearly with the vocabulary size and the number of topics, leading to a serious scalability problem. To reduce the communication complexity among processors for a better scalability, we propose a novel communication-efficient parallel topic modeling architecture based on power law, which consumes orders of magnitude less communication time when the number of topics is large. We combine the proposed communication-efficient parallel architecture with the online belief propagation (OBP) algorithm referred to as POBP for big topic modeling tasks. Extensive empirical results confirm that POBP has the following advantages to solve the big topic modeling problem: 1) high accuracy, 2) communication-efficient, 3) fast speed, and 4) constant memory usage when compared with recent state-of-the-art parallel LDA algorithms on the multi-processor architecture. Index Terms —Big topic modeling, latent Dirichlet allocation, communication complexity, multi-processor architecture, online belief propagation, power law.",
"title": ""
},
{
"docid": "d93abfdc3bc20a23e533f3ad2e30b9c9",
"text": "Over the past few years, the realm of embedded systems has expanded to include a wide variety of products, ranging from digital cameras, to sensor networks, to medical imaging systems. Consequently, engineers strive to create ever smaller and faster products, many of which have stringent power requirements. Coupled with increasing pressure to decrease costs and time-to-market, the design constraints of embedded systems pose a serious challenge to embedded systems designers. Reconfigurable hardware can provide a flexible and efficient platform for satisfying the area, performance, cost, and power requirements of many embedded systems. This article presents an overview of reconfigurable computing in embedded systems, in terms of benefits it can provide, how it has already been used, design issues, and hurdles that have slowed its adoption.",
"title": ""
},
{
"docid": "93d498adaee9070ffd608c5c1fe8e8c9",
"text": "INTRODUCTION\nFluorescence anisotropy (FA) is one of the major established methods accepted by industry and regulatory agencies for understanding the mechanisms of drug action and selecting drug candidates utilizing a high-throughput format.\n\n\nAREAS COVERED\nThis review covers the basics of FA and complementary methods, such as fluorescence lifetime anisotropy and their roles in the drug discovery process. The authors highlight the factors affecting FA readouts, fluorophore selection and instrumentation. Furthermore, the authors describe the recent development of a successful, commercially valuable FA assay for long QT syndrome drug toxicity to illustrate the role that FA can play in the early stages of drug discovery.\n\n\nEXPERT OPINION\nDespite the success in drug discovery, the FA-based technique experiences competitive pressure from other homogeneous assays. That being said, FA is an established yet rapidly developing technique, recognized by academic institutions, the pharmaceutical industry and regulatory agencies across the globe. The technical problems encountered in working with small molecules in homogeneous assays are largely solved, and new challenges come from more complex biological molecules and nanoparticles. With that, FA will remain one of the major work-horse techniques leading to precision (personalized) medicine.",
"title": ""
},
{
"docid": "72a6a7fe366def9f97ece6d1ddc46a2e",
"text": "Our work in this paper presents a prediction of quality of experience based on full reference parametric (SSIM, VQM) and application metrics (resolution, bit rate, frame rate) in SDN networks. First, we used DCR (Degradation Category Rating) as subjective method to build the training model and validation, this method is based on not only the quality of received video but also the original video but all subjective methods are too expensive, don't take place in real time and takes much time for example our method takes three hours to determine the average MOS (Mean Opinion Score). That's why we proposed novel method based on machine learning algorithms to obtain the quality of experience in an objective manner. Previous researches in this field help us to use four algorithms: Decision Tree (DT), Neural Network, K nearest neighbors KNN and Random Forest RF thanks to their efficiency. We have used two metrics recommended by VQEG group to assess the best algorithm: Pearson correlation coefficient r and Root-Mean-Square-Error RMSE. The last part of the paper describes environment based on: Weka to analyze ML algorithms, MSU tool to calculate SSIM and VQM and Mininet for the SDN simulation.",
"title": ""
},
{
"docid": "74d6c2fff4b67d05871ca0debbc4ec15",
"text": "There is great interest in developing rechargeable lithium batteries with higher energy capacity and longer cycle life for applications in portable electronic devices, electric vehicles and implantable medical devices. Silicon is an attractive anode material for lithium batteries because it has a low discharge potential and the highest known theoretical charge capacity (4,200 mAh g(-1); ref. 2). Although this is more than ten times higher than existing graphite anodes and much larger than various nitride and oxide materials, silicon anodes have limited applications because silicon's volume changes by 400% upon insertion and extraction of lithium which results in pulverization and capacity fading. Here, we show that silicon nanowire battery electrodes circumvent these issues as they can accommodate large strain without pulverization, provide good electronic contact and conduction, and display short lithium insertion distances. We achieved the theoretical charge capacity for silicon anodes and maintained a discharge capacity close to 75% of this maximum, with little fading during cycling.",
"title": ""
},
{
"docid": "e68c73806392d10c3c3fd262f6105924",
"text": "Dynamic programming (DP) is a powerful paradigm for general, nonlinear optimal control. Computing exact DP solutions is in general only possible when the process states and the control actions take values in a small discrete set. In practice, it is necessary to approximate the solutions. Therefore, we propose an algorithm for approximate DP that relies on a fuzzy partition of the state space, and on a discretization of the action space. This fuzzy Q-iteration algorithmworks for deterministic processes, under the discounted return criterion. We prove that fuzzy Q -iteration asymptotically converges to a solution that lies within a bound of the optimal solution. A bound on the suboptimality of the solution obtained in a finite number of iterations is also derived. Under continuity assumptions on the dynamics and on the reward function, we show that fuzzyQ -iteration is consistent, i.e., that it asymptotically obtains the optimal solution as the approximation accuracy increases. These properties hold both when the parameters of the approximator are updated in a synchronous fashion, and when they are updated asynchronously. The asynchronous algorithm is proven to converge at least as fast as the synchronous one. The performance of fuzzy Q iteration is illustrated in a two-link manipulator control problem. © 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "cd67a23b3ed7ab6d97a198b0e66a5628",
"text": "A growing number of children and adolescents are involved in resistance training in schools, fitness centers, and sports training facilities. In addition to increasing muscular strength and power, regular participation in a pediatric resistance training program may have a favorable influence on body composition, bone health, and reduction of sports-related injuries. Resistance training targeted to improve low fitness levels, poor trunk strength, and deficits in movement mechanics can offer observable health and fitness benefits to young athletes. However, pediatric resistance training programs need to be well-designed and supervised by qualified professionals who understand the physical and psychosocial uniqueness of children and adolescents. The sensible integration of different training methods along with the periodic manipulation of programs design variables over time will keep the training stimulus effective, challenging, and enjoyable for the participants.",
"title": ""
},
{
"docid": "03be8a60e1285d62c34b982ddf1bcf58",
"text": "A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity and policy dependence in place cells suggests that the representation is not purely spatial. We approach this puzzle from a reinforcement learning perspective: what kind of spatial representation is most useful for maximizing future reward? We show that the answer takes the form of a predictive representation. This representation captures many aspects of place cell responses that fall outside the traditional view of a cognitive map. Furthermore, we argue that entorhinal grid cells encode a low-dimensionality basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.",
"title": ""
},
{
"docid": "f05f6d9eeff0b492b74e6ab18c4707ba",
"text": "Communication is an interactive, complex, structured process involving agents that are capable of drawing conclusions from the information they have available about some real-life situations. Such situations are generally characterized as being imperfect. In this paper, we aim to address learning from the perspective of the communication between agents. To learn a collection of propositions concerning some situation is to incorporate it within one's knowledge about that situation. That is, the key factor in this activity is for the goal agent, where agents may switch role if appropriate, to integrate the information offered with what it already knows. This may require a process of belief revision, which suggests that the process of incorporation of new information should be modeled nonmonotonically. We shall employ for reasoning a three-valued based nonmonotonic logic that formalizes some aspects of revisable reasoning and it is accessible to implementation. The logic is sound and complete. A theorem-prover of the logic has successfully been implemented.",
"title": ""
},
{
"docid": "9e65315d4e241dc8d4ea777247f7c733",
"text": "A long-standing focus on compliance has traditionally constrained development of fundamental design changes for Electronic Health Records (EHRs). We now face a critical need for such innovation, as personalization and data science prompt patients to engage in the details of their healthcare and restore agency over their medical data. In this paper, we propose MedRec: a novel, decentralized record management system to handle EHRs, using blockchain technology. Our system gives patients a comprehensive, immutable log and easy access to their medical information across providers and treatment sites. Leveraging unique blockchain properties, MedRec manages authentication, confidentiality, accountability and data sharing—crucial considerations when handling sensitive information. A modular design integrates with providers' existing, local data storage solutions, facilitating interoperability and making our system convenient and adaptable. We incentivize medical stakeholders (researchers, public health authorities, etc.) to participate in the network as blockchain “miners”. This provides them with access to aggregate, anonymized data as mining rewards, in return for sustaining and securing the network via Proof of Work. MedRec thus enables the emergence of data economics, supplying big data to empower researchers while engaging patients and providers in the choice to release metadata. The purpose of this paper is to expose, in preparation for field tests, a working prototype through which we analyze and discuss our approach and the potential for blockchain in health IT and research.",
"title": ""
},
{
"docid": "04f4058d37a33245abf8ed9acd0af35d",
"text": "After being introduced in 2009, the first fully homomorphic encryption (FHE) scheme has created significant excitement in academia and industry. Despite rapid advances in the last 6 years, FHE schemes are still not ready for deployment due to an efficiency bottleneck. Here we introduce a custom hardware accelerator optimized for a class of reconfigurable logic to bring LTV based somewhat homomorphic encryption (SWHE) schemes one step closer to deployment in real-life applications. The accelerator we present is connected via a fast PCIe interface to a CPU platform to provide homomorphic evaluation services to any application that needs to support blinded computations. Specifically we introduce a number theoretical transform based multiplier architecture capable of efficiently handling very large polynomials. When synthesized for the Xilinx Virtex 7 family the presented architecture can compute the product of large polynomials in under 6.25 msec making it the fastest multiplier design of its kind currently available in the literature and is more than 102 times faster than a software implementation. Using this multiplier we can compute a relinearization operation in 526 msec. When used as an accelerator, for instance, to evaluate the AES block cipher, we estimate a per block homomorphic evaluation performance of 442 msec yielding performance gains of 28.5 and 17 times over similar CPU and GPU implementations, respectively.",
"title": ""
},
{
"docid": "0c7c179f2f86e7289910cee4ca583e4b",
"text": "This paper presents an algorithm for fingerprint image restoration using Digital Reaction-Diffusion System (DRDS). The DRDS is a model of a discrete-time discrete-space nonlinear reaction-diffusion dynamical system, which is useful for generating biological textures, patterns and structures. This paper focuses on the design of a fingerprint restoration algorithm that combines (i) a ridge orientation estimation technique using an iterative coarse-to-fine processing strategy and (ii) an adaptive DRDS having a capability of enhancing low-quality fingerprint images using the estimated ridge orientation. The phase-only image matching technique is employed for evaluating the similarity between an original fingerprint image and a restored image. The proposed algorithm may be useful for person identification applications using fingerprint images. key words: reaction-diffusion system, pattern formation, digital signal processing, digital filters, fingerprint restoration",
"title": ""
},
{
"docid": "4b3d890a8891cd8c84713b1167383f6f",
"text": "The present research tested the hypothesis that concepts of gratitude are prototypically organized and explored whether lay concepts of gratitude are broader than researchers' concepts of gratitude. In five studies, evidence was found that concepts of gratitude are indeed prototypically organized. In Study 1, participants listed features of gratitude. In Study 2, participants reliably rated the centrality of these features. In Studies 3a and 3b, participants perceived that a hypothetical other was experiencing more gratitude when they read a narrative containing central as opposed to peripheral features. In Study 4, participants remembered more central than peripheral features in gratitude narratives. In Study 5a, participants generated more central than peripheral features when they wrote narratives about a gratitude incident, and in Studies 5a and 5b, participants generated both more specific and more generalized types of gratitude in similar narratives. Throughout, evidence showed that lay conceptions of gratitude are broader than current research definitions.",
"title": ""
},
{
"docid": "c15369f923be7c8030cc8f2b1f858ced",
"text": "An important goal of scientific data analysis is to understand the behavior of a system or process based on a sample of the system. In many instances it is possible to observe both input parameters and system outputs, and characterize the system as a high-dimensional function. Such data sets arise, for instance, in large numerical simulations, as energy landscapes in optimization problems, or in the analysis of image data relating to biological or medical parameters. This paper proposes an approach to analyze and visualizing such data sets. The proposed method combines topological and geometric techniques to provide interactive visualizations of discretely sampled high-dimensional scalar fields. The method relies on a segmentation of the parameter space using an approximate Morse-Smale complex on the cloud of point samples. For each crystal of the Morse-Smale complex, a regression of the system parameters with respect to the output yields a curve in the parameter space. The result is a simplified geometric representation of the Morse-Smale complex in the high dimensional input domain. Finally, the geometric representation is embedded in 2D, using dimension reduction, to provide a visualization platform. The geometric properties of the regression curves enable the visualization of additional information about each crystal such as local and global shape, width, length, and sampling densities. The method is illustrated on several synthetic examples of two dimensional functions. Two use cases, using data sets from the UCI machine learning repository, demonstrate the utility of the proposed approach on real data. Finally, in collaboration with domain experts the proposed method is applied to two scientific challenges. The analysis of parameters of climate simulations and their relationship to predicted global energy flux and the concentrations of chemical species in a combustion simulation and their integration with temperature.",
"title": ""
}
] |
scidocsrr
|
9dca7788703754f3df30cadba621e0c4
|
Are blockchains immune to all malicious attacks ?
|
[
{
"docid": "0cf81998c0720405e2197c62afa08ee7",
"text": "User-generated online reviews can play a significant role in the success of retail products, hotels, restaurants, etc. However, review systems are often targeted by opinion spammers who seek to distort the perceived quality of a product by creating fraudulent reviews. We propose a fast and effective framework, FRAUDEAGLE, for spotting fraudsters and fake reviews in online review datasets. Our method has several advantages: (1) it exploits the network effect among reviewers and products, unlike the vast majority of existing methods that focus on review text or behavioral analysis, (2) it consists of two complementary steps; scoring users and reviews for fraud detection, and grouping for visualization and sensemaking, (3) it operates in a completely unsupervised fashion requiring no labeled data, while still incorporating side information if available, and (4) it is scalable to large datasets as its run time grows linearly with network size. We demonstrate the effectiveness of our framework on synthetic and real datasets; where FRAUDEAGLE successfully reveals fraud-bots in a large online app review database. Introduction The Web has greatly enhanced the way people perform certain activities (e.g. shopping), find information, and interact with others. Today many people read/write reviews on merchant sites, blogs, forums, and social media before/after they purchase products or services. Examples include restaurant reviews on Yelp, product reviews on Amazon, hotel reviews on TripAdvisor, and many others. Such user-generated content contains rich information about user experiences and opinions, which allow future potential customers to make better decisions about spending their money, and also help merchants improve their products, services, and marketing. Since online reviews can directly influence customer purchase decisions, they are crucial to the success of businesses. While positive reviews with high ratings can yield financial gains, negative reviews can damage reputation and cause monetary loss. This effect is magnified as the information spreads through the Web (Hitlin 2003; Mendoza, Poblete, and Castillo 2010). As a result, online review systems are attractive targets for opinion fraud. Opinion fraud involves reviewers (often paid) writing bogus reviews (Kost May 2012; Copyright c © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Streitfeld August 2011). These spam reviews come in two flavors: defaming-spam which untruthfully vilifies, or hypespam that deceitfully promotes the target product. The opinion fraud detection problem is to spot the fake reviews in online sites, given all the reviews on the site, and for each review, its text, its author, the product it was written for, timestamp of posting, and its star-rating. Typically no user profile information is available (or is self-declared and cannot be trusted), while more side information for products (e.g. price, brand), and for reviews (e.g. number of (helpful) feedbacks) could be available depending on the site. Detecting opinion fraud, as defined above, is a non-trivial and challenging problem. Fake reviews are often written by experienced professionals who are paid to write high quality, believable reviews. As a result, it is difficult for an average potential customer to differentiate bogus reviews from truthful ones, just by looking at individual reviews text(Ott et al. 2011). 
As such, manual labeling of reviews is hard and ground truth information is often unavailable, which makes training supervised models less attractive for this problem. Summary of previous work. Previous attempts at solving the problem use several heuristics, such as duplicated reviews (Jindal and Liu 2008), or acquire bogus reviews from non-experts (Ott et al. 2011), to generate pseudo-ground truth, or a reference dataset. This data is then used for learning classification models together with carefully engineered features. One downside of such techniques is that they do not generalize: one needs to collect new data and train a new model for review data from a different domain, e.g., hotel vs. restaurant reviews. Moreover feature selection becomes a tedious sub-problem, as datasets from different domains might exhibit different characteristics. Other feature-based proposals include (Lim et al. 2010; Mukherjee, Liu, and Glance 2012). A large body of work on fraud detection relies on review text information (Jindal and Liu 2008; Ott et al. 2011; Feng, Banerjee, and Choi 2012) or behavioral evidence (Lim et al. 2010; Xie et al. 2012; Feng et al. 2012), and ignore the connectivity structure of review data. On the other hand, the network of reviewers and products contains rich information that implicitly represents correlations among these entities. The review network is also invaluable for detecting teams of fraudsters that operate collaboratively on targeted products. Our contributions. In this work we propose an unsuperProceedings of the Seventh International AAAI Conference on Weblogs and Social Media",
"title": ""
}
] |
[
{
"docid": "3cd3ac867951fb8153dd979ad64c826e",
"text": "Based on information-theoretic security, physical-layer security in an optical code division multiple access (OCDMA) system is analyzed. For the first time, the security leakage factor is employed to evaluate the physicallayer security level, and the safe receiving distance is used to measure the security transmission range. By establishing the wiretap channel model of a coherent OCDMA system, the influences of the extraction location, the extraction ratio, the number of active users, and the length of the code on the physical-layer security are analyzed quantitatively. The physical-layer security and reliability of the coherent OCDMA system are evaluated under the premise of satisfying the legitimate user’s bit error rate and the security leakage factor. The simulation results show that the number of users must be in a certain interval in order to meet the legitimate user’s reliability and security. Furthermore, the security performance of the coherent OCDMA-based wiretap channel can be improved by increasing the code length.",
"title": ""
},
{
"docid": "4828e830d440cb7a2c0501952033da2f",
"text": "This paper presents a current-mode control non-inverting buck-boost converter. The proposed circuit is controlled by the current mode and operated in three operation modes which are buck, buck-boost, and boost mode. The operation mode is automatically determined by the ratio between the input and output voltages. The proposed circuit is simulated by HSPICE with 0.5 um standard CMOS parameters. Its input voltage range is 2.5–5 V, and the output voltage range is 1.5–5 V. The maximum efficiency is 92% when it operates in buck mode.",
"title": ""
},
{
"docid": "fce1fd47f04105be554465c7ae4245db",
"text": "A model for evaluating the lifetime of closed electrical contacts was described. The model is based on the diffusion of oxidizing agent into the contact zone. The results showed that there is good agreement between the experimental data as reported by different authors in the literature and the derived expressions for the limiting value of contact lifetime.",
"title": ""
},
{
"docid": "14c981a63e34157bb163d4586502a059",
"text": "In this paper, we investigate an angle of arrival (AoA) and angle of departure (AoD) estimation algorithm for sparse millimeter wave multiple-input multiple-output (MIMO) channels. The analytical channel model whose use we advocate here is the beam space (or virtual) MIMO channel representation. By leveraging the beam space MIMO concept, we characterize probabilistic channel priors under an analog precoding and combining constraints. This investigation motivates Bayesian inference approaches to virtual AoA and AoD estimation. We divide the estimation task into downlink sounding for AoA estimation and uplink sounding for AoD estimation. A belief propagation (BP)-type algorithm is adopted, leading to computationally efficient approximate message passing (AMP) and approximate log-likelihood ratio testing (ALLRT) algorithms. Numerical results demonstrate that the proposed algorithm outperforms the conventional AMP in terms of the AoA and AoD estimation accuracy for the sparse millimeter wave MIMO channel.",
"title": ""
},
{
"docid": "406fab96a8fd49f4d898a9735ee1512f",
"text": "An otolaryngology phenol applicator kit can be successfully and safely used in the performance of chemical matricectomy. The applicator kit provides a convenient way to apply phenol to the nail matrix precisely and efficiently, whereas minimizing both the risk of application to nonmatrix surrounding soft tissue and postoperative recovery time.Given the smaller size of the foam-tipped applicator, we feel that this is a more precise tool than traditional cotton-tipped applicators for chemical matricectomy. Particularly with regard to lower extremity nail ablation and matricectomy, minimizing soft tissue inflammation could in turn reduce the risk of postoperative infections, decrease recovery time, as well and make for a more positive overall patient experience.",
"title": ""
},
{
"docid": "f2b3f12251d636fe0ac92a2965826525",
"text": "Today, not only Internet companies such as Google, Facebook or Twitter do have Big Data but also Enterprise Information Systems store an ever growing amount of data (called Big Enterprise Data in this paper). In a classical SAP system landscape a central data warehouse (SAP BW) is used to integrate and analyze all enterprise data. In SAP BW most of the business logic required for complex analytical tasks (e.g., a complex currency conversion) is implemented in the application layer on top of a standard relational database. While being independent from the underlying database when using such an architecture, this architecture has two major drawbacks when analyzing Big Enterprise Data: (1) algorithms in ABAP do not scale with the amount of data and (2) data shipping is required. To this end, we present a novel programming language called SQLScript to efficiently support complex and scalable analytical tasks inside SAP’s new main-memory database HANA. SQLScript provides two major extensions to the SQL dialect of SAP HANA: A functional and a procedural extension. While the functional extension allows the definition of scalable analytical tasks on Big Enterprise Data, the procedural extension provides imperative constructs to orchestrate the analytical tasks. The major contributions of this paper are two novel functional extensions: First, an extended version of the MapReduce programming model for supporting parallelizable user-defined functions (UDFs). Second, compared to recursion in the SQL standard, a generalized version of recursion to support graph analytics as well as machine learning tasks.",
"title": ""
},
{
"docid": "0d75a194abf88a0cbf478869dc171794",
"text": "As a promising way for heterogeneous data analytics, consensus clustering has attracted increasing attention in recent decades. Among various excellent solutions, the co-association matrix based methods form a landmark, which redefines consensus clustering as a graph partition problem. Nevertheless, the relatively high time and space complexities preclude it from wide real-life applications. We, therefore, propose Spectral Ensemble Clustering (SEC) to leverage the advantages of co-association matrix in information integration but run more efficiently. We disclose the theoretical equivalence between SEC and weighted K-means clustering, which dramatically reduces the algorithmic complexity. We also derive the latent consensus function of SEC, which to our best knowledge is the first to bridge co-association matrix based methods to the methods with explicit global objective functions. Further, we prove in theory that SEC holds the robustness, generalizability, and convergence properties. We finally extend SEC to meet the challenge arising from incomplete basic partitions, based on which a row-segmentation scheme for big data clustering is proposed. Experiments on various real-world data sets in both ensemble and multi-view clustering scenarios demonstrate the superiority of SEC to some state-of-the-art methods. In particular, SEC seems to be a promising candidate for big data clustering.",
"title": ""
},
{
"docid": "c45344ee6d7feb5a9bb15e32f63217f8",
"text": "In this paper athree-stage pulse generator architecture capable of generating high voltage, high current pulses is reported and system issues are presented. Design choices and system dynamics are explained both qualitatively and quantitatively with discussion sections followed by the presentation of closed-form expressions and numerical analysis that provide insight into the system's operation. Analysis targeted at optimizing performance focuses on diode opening switch pumping, energy efficiency, and compensation of parasitic reactances. A compact system based on these design guidelines has been built to output 8 kV, 5 ns pulses into 50 Ω. Output risetimes below 1 ns have been achieved using two different compression techniques. At only 1.5 kg, this light and compact system shows promise for a variety of pulsed power applications requiring the long lifetime, low jitter performance of a solid state pulse generator that can produce fast, high voltage pulses at high repetition rates.",
"title": ""
},
{
"docid": "6291f21727c70d3455a892a8edd3b18c",
"text": "Given a single column of values, existing approaches typically employ regex-like rules to detect errors by finding anomalous values inconsistent with others. Such techniques make local decisions based only on values in the given input column, without considering a more global notion of compatibility that can be inferred from large corpora of clean tables. We propose \\sj, a statistics-based technique that leverages co-occurrence statistics from large corpora for error detection, which is a significant departure from existing rule-based methods. Our approach can automatically detect incompatible values, by leveraging an ensemble of judiciously selected generalization languages, each of which uses different generalizations and is sensitive to different types of errors. Errors so detected are based on global statistics, which is robust and aligns well with human intuition of errors. We test \\sj on a large set of public Wikipedia tables, as well as proprietary enterprise Excel files. While both of these test sets are supposed to be of high-quality, \\sj makes surprising discoveries of over tens of thousands of errors in both cases, which are manually verified to be of high precision (over 0.98). Our labeled benchmark set on Wikipedia tables is released for future research.",
"title": ""
},
{
"docid": "5d866d630f78bb81b5ce8d3dae2521ee",
"text": "In present-day high-performance electronic components, the generated heat loads result in unacceptably high junction temperatures and reduced component lifetimes. Thermoelectric modules can, in principle, enhance heat removal and reduce the temperatures of such electronic devices. However, state-of-the-art bulk thermoelectric modules have a maximum cooling flux qmax of only about 10 W cm(-2), while state-of-the art commercial thin-film modules have a qmax <100 W cm(-2). Such flux values are insufficient for thermal management of modern high-power devices. Here we show that cooling fluxes of 258 W cm(-2) can be achieved in thin-film Bi2Te3-based superlattice thermoelectric modules. These devices utilize a p-type Sb2Te3/Bi2Te3 superlattice and n-type δ-doped Bi2Te3-xSex, both of which are grown heteroepitaxially using metalorganic chemical vapour deposition. We anticipate that the demonstration of these high-cooling-flux modules will have far-reaching impacts in diverse applications, such as advanced computer processors, radio-frequency power devices, quantum cascade lasers and DNA micro-arrays.",
"title": ""
},
{
"docid": "6fac5129579a54a8320ea84df29a0393",
"text": "Lewis G. Halsey is in the Department of Life Sciences, University of Roehampton, London, UK; Douglas Curran-Everett is in the Division of Biostatistics and Bioinformatics, National Jewish Health, and the Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Denver, Denver, Colorado, USA; Sarah L. Vowler is at Cancer Research UK Cambridge Institute, University of Cambridge, Cambridge, UK; and Gordon B. Drummond is at the University of Edinburgh, Edinburgh, UK. e-mail: [email protected] The fickle P value generates irreproducible results",
"title": ""
},
{
"docid": "c15f36dccebee50056381c41e6ddb2dc",
"text": "Instance-level object segmentation is an important yet under-explored task. Most of state-of-the-art methods rely on region proposal methods to extract candidate segments and then utilize object classification to produce final results. Nonetheless, generating reliable region proposals itself is a quite challenging and unsolved task. In this work, we propose a Proposal-Free Network (PFN) to address the instance-level object segmentation problem, which outputs the numbers of instances of different categories and the pixel-level information on i) the coordinates of the instance bounding box each pixel belongs to, and ii) the confidences of different categories for each pixel, based on pixel-to-pixel deep convolutional neural network. All the outputs together, by using any off-the-shelf clustering method for simple post-processing, can naturally generate the ultimate instance-level object segmentation results. The whole PFN can be easily trained without the requirement of a proposal generation stage. Extensive evaluations on the challenging PASCAL VOC 2012 semantic segmentation benchmark demonstrate the effectiveness of the proposed PFN solution without relying on any proposal generation methods.",
"title": ""
},
{
"docid": "e5687e8ac3eb1fbac18d203c049d9446",
"text": "Information security policy compliance is one of the key concerns that face organizations today. Although, technical and procedural security measures help improve information security, there is an increased need to accommodate human, social and organizational factors. While employees are considered the weakest link in information security domain, they also are assets that organizations need to leverage effectively. Employees' compliance with Information Security Policies (ISPs) is critical to the success of an information security program. The purpose of this research is to develop a measurement tool that provides better measures for predicting and explaining employees' compliance with ISPs by examining the role of information security awareness in enhancing employees' compliance with ISPs. The study is the first to address compliance intention from a users' perspective. Overall, analysis results indicate strong support for the proposed instrument and represent an early confirmation for the validation of the underlying theoretical model.",
"title": ""
},
{
"docid": "45ce30113e80cc6a28f243b0d1661c58",
"text": "We describe and compare several methods for generating game character controllers that mimic the playing style of a particular human player, or of a population of human players, across video game levels. Similarity in playing style is measured through an evaluation framework, that compares the play trace of one or several human players with the punctuated play trace of an AI player. The methods that are compared are either hand-coded, direct (based on supervised learning) or indirect (based on maximising a similarity measure). We find that a method based on neuroevolution performs best both in terms of the instrumental similarity measure and in phenomenological evaluation by human spectators. A version of the classic platform game “Super Mario Bros” is used as the testbed game in this study but the methods are applicable to other games that are based on character movement in space.",
"title": ""
},
{
"docid": "9c1687323661ccb6bf2151824edc4260",
"text": "In this work we present the design of a digitally controlled ring type oscillator in 0.5 μm CMOS technology for a low-cost and portable radio-frequency diathermy (RFD) device. The oscillator circuit is composed by a low frequency ring oscillator (LFRO), a voltage controlled ring oscillator (VCRO), and a logic control. The digital circuit generates an input signal for the LFO, which generates a voltage ramp that controls the oscillating output signal of the VCRO in the range of 500 KHz to 1 MHz. Simulation results show that the proposed circuit exhibits controllable output characteristics in the range of 500 KHz–1 MHz, with low power consumption and low phase noise, making it suitable for a portable RFD device.",
"title": ""
},
{
"docid": "f267e8cfbe10decbe16fa83c97e76049",
"text": "The growing prevalence of e-learning systems and on-line courses has made educational material widely accessible to students of varying abilities, backgrounds and styles. There is thus a growing need to accomodate for individual differences in such e-learning systems. This paper presents a new algorithm for personliazing educational content to students that combines collaborative filtering algorithms with social choice theory. The algorithm constructs a “difficulty” ranking over questions for a target student by aggregating the ranking of similar students, as measured by different aspects of their performance on common past questions, such as grades, number of retries, and time spent solving questions. It infers a difficulty ranking directly over the questions for a target student, rather than ordering them according to predicted performance, which is prone to error. The algorithm was tested on two large real world data sets containing tens of thousands of students and a million records. Its performance was compared to a variety of personalization methods as well as a non-personalized method that relied on a domain expert. It was able to significantly outperform all of these approaches according to standard information retrieval metrics. Our approach can potentially be used to support teachers in tailoring problem sets and exams to individual students and students in informing them about areas they may need to strengthen.",
"title": ""
},
{
"docid": "93e93d2278706638859f5f4b1601bfa6",
"text": "To acquire accurate, real-time hyperspectral images with high spatial resolution, we develop two types of low-cost, lightweight Whisk broom hyperspectral sensors that can be loaded onto lightweight unmanned autonomous vehicle (UAV) platforms. A system is composed of two Mini-Spectrometers, a polygon mirror, references for sensor calibration, a GPS sensor, a data logger and a power supply. The acquisition of images with high spatial resolution is realized by a ground scanning along a direction perpendicular to the flight direction based on the polygon mirror. To cope with the unstable illumination condition caused by the low-altitude observation, skylight radiation and dark current are acquired in real-time by the scanning structure. Another system is composed of 2D optical fiber array connected to eight Mini-Spectrometers and a telephoto lens, a convex lens, a micro mirror, a GPS sensor, a data logger and a power supply. The acquisition of images is realized by a ground scanning based on the rotation of the micro mirror.",
"title": ""
},
{
"docid": "91f89990f9d41d3a92cbff38efc56b57",
"text": "ID3 algorithm was a classic classification of data mining. It always selected the attribute with many values. The attribute with many values wasn't the correct one, and it always created wrong classification. In the application of intrusion detection system, it would created fault alarm and omission alarm. To this fault, an improved decision tree algorithm was proposed. Though improvement of information gain formula, the correct attribute would be got. The decision tree was created after the data collected classified correctly. The tree would be not high and has a few of branches. The rule set would be got based on the decision tree. Experimental results showed the effectiveness of the algorithm, false alarm rate and omission rate decreased, increasing the detection rate and reducing the space consumption.",
"title": ""
},
{
"docid": "f519e878b3aae2f0024978489db77425",
"text": "In this paper, we propose a new halftoning scheme that preserves the structure and tone similarities of images while maintaining the simplicity of Floyd-Steinberg error diffusion. Our algorithm is based on the Floyd-Steinberg error diffusion algorithm, but the threshold modulation part is modified to improve the over-blurring issue of the Floyd-Steinberg error diffusion algorithm. By adding some structural information on images obtained using the Laplacian operator to the quantizer thresholds, the structural details in the textured region can be preserved. The visual artifacts of the original error diffusion that is usually visible in the uniform region is greatly reduced by adding noise to the thresholds. This is especially true for the low contrast region because most existing error diffusion algorithms cannot preserve structural details but our algorithm preserves them clearly using threshold modulation. Our algorithm has been evaluated using various types of images including some with the low contrast region and assessed numerically using the MSSIM measure with other existing state-of-art halftoning algorithms. The results show that our method performs better than existing approaches both in the textured region and in the uniform region with the faster computation speed.",
"title": ""
},
{
"docid": "583d2f754a399e8446855b165407f6ee",
"text": "In this work, classification of cellular structures in the high resolutional histopathological images and the discrimination of cellular and non-cellular structures have been investigated. The cell classification is a very exhaustive and time-consuming process for pathologists in medicine. The development of digital imaging in histopathology has enabled the generation of reasonable and effective solutions to this problem. Morever, the classification of digital data provides easier analysis of cell structures in histopathological data. Convolutional neural network (CNN), constituting the main theme of this study, has been proposed with different spatial window sizes in RGB color spaces. Hence, to improve the accuracies of classification results obtained by supervised learning methods, spatial information must also be considered. So, spatial dependencies of cell and non-cell pixels can be evaluated within different pixel neighborhoods in this study. In the experiments, the CNN performs superior than other pixel classification methods including SVM and k-Nearest Neighbour (k-NN). At the end of this paper, several possible directions for future research are also proposed.",
"title": ""
}
] |
scidocsrr
|
a147482deaac13986d5360193423da26
|
Sex differences in response to visual sexual stimuli: a review.
|
[
{
"docid": "9b8d4b855bab5e2fdcadd1fe1632f197",
"text": "Men report more permissive sexual attitudes and behavior than do women. This experiment tested whether these differences might result from false accommodation to gender norms (distorted reporting consistent with gender stereotypes). Participants completed questionnaires under three conditions. Sex differences in self-reported sexual behavior were negligible in a bogus pipeline condition in which participants believed lying could be detected, moderate in an anonymous condition, and greatest in an exposure threat condition in which the experimenter could potentially view participants responses. This pattern was clearest for behaviors considered less acceptable for women than men (e.g., masturbation, exposure to hardcore & softcore erotica). Results suggest that some sex differences in self-reported sexual behavior reflect responses influenced by normative expectations for men and women.",
"title": ""
},
{
"docid": "a6a7770857964e96f98bd4021d38f59f",
"text": "During human evolutionary history, there were \"trade-offs\" between expending time and energy on child-rearing and mating, so both men and women evolved conditional mating strategies guided by cues signaling the circumstances. Many short-term matings might be successful for some men; others might try to find and keep a single mate, investing their effort in rearing her offspring. Recent evidence suggests that men with features signaling genetic benefits to offspring should be preferred by women as short-term mates, but there are trade-offs between a mate's genetic fitness and his willingness to help in child-rearing. It is these circumstances and the cues that signal them that underlie the variation in short- and long-term mating strategies between and within the sexes.",
"title": ""
}
] |
[
{
"docid": "1c832140fce684c68fd91779d62596e3",
"text": "The safety and antifungal efficacy of amphotericin B lipid complex (ABLC) were evaluated in 556 cases of invasive fungal infection treated through an open-label, single-patient, emergency-use study of patients who were refractory to or intolerant of conventional antifungal therapy. All 556 treatment episodes were evaluable for safety. During the course of ABLC therapy, serum creatinine levels significantly decreased from baseline (P < .02). Among 162 patients with serum creatinine values > or = 2.5 mg/dL at the start of ABLC therapy (baseline), the mean serum creatinine value decreased significantly from the first week through the sixth week (P < or = .0003). Among the 291 mycologically confirmed cases evaluable for therapeutic response, there was a complete or partial response to ABLC in 167 (57%), including 42% (55) of 130 cases of aspergillosis, 67% (28) of 42 cases of disseminated candidiasis, 71% (17) of 24 cases of zygomycosis, and 82% (9) of 11 cases of fusariosis. Response rates varied according to the pattern of invasive fungal infection, underlying condition, and reason for enrollment (intolerance versus progressive infection). These findings support the use of ABLC in the treatment of invasive fungal infections in patients who are intolerant of or refractory to conventional antifungal therapy.",
"title": ""
},
{
"docid": "e92299720be4d028b4a7d726c99bc216",
"text": "Nowadays terahertz spectroscopy is a well-established technique and recent progresses in technology demonstrated that this new technique is useful for both fundamental research and industrial applications. Varieties of applications such as imaging, non destructive testing, quality control are about to be transferred to industry supported by permanent improvements from basic research. Since chemometrics is today routinely applied to IR spectroscopy, we discuss in this paper the advantages of using chemometrics in the framework of terahertz spectroscopy. Different analytical procedures are illustrates. We conclude that advanced data processing is the key point to validate routine terahertz spectroscopy as a new reliable analytical technique.",
"title": ""
},
{
"docid": "b27ab468a885a3d52ec2081be06db2ef",
"text": "The beautification of human photos usually requires professional editing softwares, which are difficult for most users. In this technical demonstration, we propose a deep face beautification framework, which is able to automatically modify the geometrical structure of a face so as to boost the attractiveness. A learning based approach is adopted to capture the underlying relations between the facial shape and the attractiveness via training the Deep Beauty Predictor (DBP). Relying on the pre-trained DBP, we construct the BeAuty SHaper (BASH) to infer the \"flows\" of landmarks towards the maximal aesthetic level. BASH modifies the facial landmarks with the direct guidance of the beauty score estimated by DBP.",
"title": ""
},
{
"docid": "1abcede6d3044e5550df404cfb7c87a4",
"text": "There is intense interest in graphene in fields such as physics, chemistry, and materials science, among others. Interest in graphene's exceptional physical properties, chemical tunability, and potential for applications has generated thousands of publications and an accelerating pace of research, making review of such research timely. Here is an overview of the synthesis, properties, and applications of graphene and related materials (primarily, graphite oxide and its colloidal suspensions and materials made from them), from a materials science perspective.",
"title": ""
},
{
"docid": "ee25ce281929eb63ce5027060be799c9",
"text": "Enfin, je ne peux terminer ces quelques lignes sans remercier ma famille qui dans l'ombre m'apporte depuis si longtemps tout le soutien qui m'est nécessaire, et spécialement, mes pensées vont à Magaly. Un peu d'histoire… Dès les années 60, les données informatisées dans les organisations ont pris une importance qui n'a cessé de croître. Les systèmes informatiques gérant ces données sont utilisés essentiellement pour faciliter l'activité quotidienne des organisations et pour soutenir les prises de décision. La démocratisation de la micro-informatique dans les années 80 a permis un important développement de ces systèmes augmentant considérablement les quantités de données 1 2 informatisées disponibles. Face aux évolutions nombreuses et rapides qui s'imposent aux organisations, la prise de décision est devenue dès les années 90 une activité primordiale nécessitant la mise en place de systèmes dédiés efficaces [Inmon, 1994]. A partir de ces années, les éditeurs de logiciels ont proposé des outils facilitant l'analyse des données pour soutenir les prises de décision. Les tableurs sont probablement les premiers outils qui ont été utilisés pour analyser les données à des fins décisionnelles. Ils ont été complétés par des outils facilitant l'accès aux données pour les décideurs au travers d'interfaces graphiques dédiées au « requêtage » ; citons le logiciel Business Objects qui reste aujourd'hui encore l'un des plus connus. Le développement de systèmes dédiés à la prise de décision a vu naître des outils E.T.L. (« Extract-Transform-Load ») destinés à faciliter l'extraction et la transformation de données décisionnelles. Dès la fin des années 90, les acteurs importants tels que Microsoft, Oracle, IBM, SAP sont intervenus sur ce nouveau marché en faisant évoluer leurs outils et en acquérant de nombreux logiciels spécialisés ; par exemple, SAP vient d'acquérir Business Objects pour 4,8 milliards d'euros. Ils disposent aujourd'hui d'offres complètes intégrant l'ensemble de la chaîne décisionnelle : E.T.L., stockage (S.G.B.D.), restitution et analyse. Cette dernière décennie connaît encore une évolution marquante avec le développement d'une offre issue du monde du logiciel libre (« open source ») qui atteint aujourd'hui une certaine maturité (Talend 3 , JPalo 4 , Jasper 5). Dominée par les outils du marché, l'informatique décisionnelle est depuis le milieu des années 90 un domaine investi par le monde de la recherche au travers des concepts d'entrepôt de données (« data warehouse ») [Widom, 1995] [Chaudhury, et al., 1997] et d'OLAP (« On-Line Analytical Processing ») [Codd, et al., 1993]. D'abord diffusées dans …",
"title": ""
},
{
"docid": "bc3aaa01d7e817a97c6709d193a74f9e",
"text": "Monolingual dictionaries are widespread and semantically rich resources. This paper presents a simple model that learns to compute word embeddings by processing dictionary definitions and trying to reconstruct them. It exploits the inherent recursivity of dictionaries by encouraging consistency between the representations it uses as inputs and the representations it produces as outputs. The resulting embeddings are shown to capture semantic similarity better than regular distributional methods and other dictionary-based methods. In addition, the method shows strong performance when trained exclusively on dictionary data and generalizes in one shot.",
"title": ""
},
{
"docid": "e34d244a395a753b0cb97f8535b56add",
"text": "We propose Quadruplet Convolutional Neural Networks (Quad-CNN) for multi-object tracking, which learn to associate object detections across frames using quadruplet losses. The proposed networks consider target appearances together with their temporal adjacencies for data association. Unlike conventional ranking losses, the quadruplet loss enforces an additional constraint that makes temporally adjacent detections more closely located than the ones with large temporal gaps. We also employ a multi-task loss to jointly learn object association and bounding box regression for better localization. The whole network is trained end-to-end. For tracking, the target association is performed by minimax label propagation using the metric learned from the proposed network. We evaluate performance of our multi-object tracking algorithm on public MOT Challenge datasets, and achieve outstanding results.",
"title": ""
},
{
"docid": "9276dc2798f90483025ea01e69c51001",
"text": "Convolutional Neural Network (CNN) was firstly introduced in Computer Vision for image recognition by LeCun et al. in 1989. Since then, it has been widely used in image recognition and classification tasks. The recent impressive success of Krizhevsky et al. in ILSVRC 2012 competition demonstrates the significant advance of modern deep CNN on image classification task. Inspired by his work, many recent research works have been concentrating on understanding CNN and extending its application to more conventional computer vision tasks. Their successes and lessons have promoted the development of both CNN and vision science. This article makes a survey of recent progress in CNN since 2012. We will introduce the general architecture of a modern CNN and make insights into several typical CNN incarnations which have been studied extensively. We will also review the efforts to understand CNNs and review important applications of CNNs in computer vision tasks.",
"title": ""
},
{
"docid": "c7d71b7bb07f62f4b47d87c9c4bae9b3",
"text": "Smart contracts are full-fledged programs that run on blockchains (e.g., Ethereum, one of the most popular blockchains). In Ethereum, gas (in Ether, a cryptographic currency like Bitcoin) is the execution fee compensating the computing resources of miners for running smart contracts. However, we find that under-optimized smart contracts cost more gas than necessary, and therefore the creators or users will be overcharged. In this work, we conduct the first investigation on Solidity, the recommended compiler, and reveal that it fails to optimize gas-costly programming patterns. In particular, we identify 7 gas-costly patterns and group them to 2 categories. Then, we propose and develop GASPER, a new tool for automatically locating gas-costly patterns by analyzing smart contracts' bytecodes. The preliminary results on discovering 3 representative patterns from 4,240 real smart contracts show that 93.5%, 90.1% and 80% contracts suffer from these 3 patterns, respectively.",
"title": ""
},
{
"docid": "361e874cccb263b202155ef92e502af3",
"text": "String similarity join is an important operation in data integration and cleansing that finds similar string pairs from two collections of strings. More than ten algorithms have been proposed to address this problem in the recent two decades. However, existing algorithms have not been thoroughly compared under the same experimental framework. For example, some algorithms are tested only on specific datasets. This makes it rather difficult for practitioners to decide which algorithms should be used for various scenarios. To address this problem, in this paper we provide a comprehensive survey on a wide spectrum of existing string similarity join algorithms, classify them into different categories based on their main techniques, and compare them through extensive experiments on a variety of real-world datasets with different characteristics. We also report comprehensive findings obtained from the experiments and provide new insights about the strengths and weaknesses of existing similarity join algorithms which can guide practitioners to select appropriate algorithms for various scenarios.",
"title": ""
},
{
"docid": "edd39b11eaed2dc89ab74542ce9660bb",
"text": "The volume of data is growing at an increasing rate. This growth is both in size and in connectivity, where connectivity refers to the increasing presence of relationships between data. Social networks such as Facebook and Twitter store and process petabytes of data each day. Graph databases have gained renewed interest in the last years, due to their applications in areas such as the Semantic Web and Social Network Analysis. Graph databases provide an effective and efficient solution to data storage and querying data in these scenarios, where data is rich in relationships. In this paper, it is analyzed the fundamental points of graph databases, showing their main characteristics and advantages. We study Neo4j, the top graph database software in the market and evaluate its performance using the Social Network Benchmark (SNB).",
"title": ""
},
{
"docid": "e74240aef79f42ac0345a2ae49ecde4a",
"text": "Existing automatic music generation approaches that feature deep learning can be broadly classified into two types: raw audio models and symbolic models. Symbolic models, which train and generate at the note level, are currently the more prevalent approach; these models can capture long-range dependencies of melodic structure, but fail to grasp the nuances and richness of raw audio generations. Raw audio models, such as DeepMind’s WaveNet, train directly on sampled audio waveforms, allowing them to produce realistic-sounding, albeit unstructured music. In this paper, we propose an automatic music generation methodology combining both of these approaches to create structured, realistic-sounding compositions. We consider a Long Short Term Memory network to learn the melodic structure of different styles of music, and then use the unique symbolic generations from this model as a conditioning input to a WaveNet-based raw audio generator, creating a model for automatic, novel music. We then evaluate this approach by showcasing results of this work.",
"title": ""
},
{
"docid": "77f5216ede8babf4fb3b2bcbfc9a3152",
"text": "Various aspects of the theory of random walks on graphs are surveyed. In particular, estimates on the important parameters of access time, commute time, cover time and mixing time are discussed. Connections with the eigenvalues of graphs and with electrical networks, and the use of these connections in the study of random walks is described. We also sketch recent algorithmic applications of random walks, in particular to the problem of sampling.",
"title": ""
},
{
"docid": "d0ebee0648beecbd00faaf67f76f256c",
"text": "Text mining is the use of automated methods for exploiting the enormous amount of knowledge available in the biomedical literature. There are at least as many motivations for doing text mining work as there are types of bioscientists. Model organism database curators have been heavy participants in the development of the field due to their need to process large numbers of publications in order to populate the many data fields for every gene in their species of interest. Bench scientists have built biomedical text mining applications to aid in the development of tools for interpreting the output of high-throughput assays and to improve searches of sequence databases (see [1] for a review). Bioscientists of every stripe have built applications to deal with the dual issues of the doubleexponential growth in the scientific literature over the past few years and of the unique issues in searching PubMed/ MEDLINE for genomics-related publications. A surprising phenomenon can be noted in the recent history of biomedical text mining: although several systems have been built and deployed in the past few years—Chilibot, Textpresso, and PreBIND (see Text S1 for these and most other citations), for example—the ones that are seeing high usage rates and are making productive contributions to the working lives of bioscientists have been built not by text mining specialists, but by bioscientists. We speculate on why this might be so below. Three basic types of approaches to text mining have been prevalent in the biomedical domain. Co-occurrence– based methods do no more than look for concepts that occur in the same unit of text—typically a sentence, but sometimes as large as an abstract—and posit a relationship between them. (See [2] for an early co-occurrence–based system.) For example, if such a system saw that BRCA1 and breast cancer occurred in the same sentence, it might assume a relationship between breast cancer and the BRCA1 gene. Some early biomedical text mining systems were co-occurrence–based, but such systems are highly error prone, and are not commonly built today. In fact, many text mining practitioners would not consider them to be text mining systems at all. Co-occurrence of concepts in a text is sometimes used as a simple baseline when evaluating more sophisticated systems; as such, they are nontrivial, since even a co-occurrence– based system must deal with variability in the ways that concepts are expressed in human-produced texts. For example, BRCA1 could be referred to by any of its alternate symbols—IRIS, PSCP, BRCAI, BRCC1, or RNF53 (or by any of their many spelling variants, which include BRCA1, BRCA-1, and BRCA 1)— or by any of the variants of its full name, viz. breast cancer 1, early onset (its official name per Entrez Gene and the Human Gene Nomenclature Committee), as breast cancer susceptibility gene 1, or as the latter’s variant breast cancer susceptibility gene-1. Similarly, breast cancer could be referred to as breast cancer, carcinoma of the breast, or mammary neoplasm. These variability issues challenge more sophisticated systems, as well; we discuss ways of coping with them in Text S1. Two more common (and more sophisticated) approaches to text mining exist: rule-based or knowledgebased approaches, and statistical or machine-learning-based approaches. The variety of types of rule-based systems is quite wide. In general, rulebased systems make use of some sort of knowledge. 
This might take the form of general knowledge about how language is structured, specific knowledge about how biologically relevant facts are stated in the biomedical literature, knowledge about the sets of things that bioscientists talk about and the kinds of relationships that they can have with one another, and the variant forms by which they might be mentioned in the literature, or any subset or combination of these. (See [3] for an early rule-based system, and [4] for a discussion of rule-based approaches to various biomedical text mining tasks.) At one end of the spectrum, a simple rule-based system might use hardcoded patterns—for example, ,gene. plays a role in ,disease. or ,disease. is associated with ,gene.—to find explicit statements about the classes of things in which the researcher is interested. At the other end of the spectrum, a rulebased system might use sophisticated linguistic and semantic analyses to recognize a wide range of possible ways of making assertions about those classes of things. It is worth noting that useful systems have been built using technologies at both ends of the spectrum, and at many points in between. In contrast, statistical or machine-learning–based systems operate by building classifiers that may operate on any level, from labelling part of speech to choosing syntactic parse trees to classifying full sentences or documents. (See [5] for an early learning-based system, and [4] for a discussion of learning-based approaches to various biomedical text mining tasks.) Rule-based and statistical systems each have their advantages and",
"title": ""
},
{
"docid": "86ba97e91a8c2bcb1015c25df7c782db",
"text": "After a knee joint surgery, due to severe pain and immobility of the patient, the tissue around the knee become harder and knee stiffness will occur, which may causes many problems such as scar tissue swelling, bleeding, and fibrosis. A CPM (Continuous Passive Motion) machine is an apparatus that is being used to patient recovery, retrieving moving abilities of the knee, and reducing tissue swelling, after the knee joint surgery. This device prevents frozen joint syndrome (adhesive capsulitis), joint stiffness, and articular cartilage destruction by stimulating joint tissues, and flowing synovial fluid and blood around the knee joint. In this study, a new, light, and portable CPM machine with an appropriate interface, is designed and manufactured. The knee joint can be rotated from the range of -15° to 120° with a pace of 0.1 degree/sec to 1 degree/sec by this machine. One of the most important advantages of this new machine is its own user-friendly interface. This apparatus is controlled via an Android-based application; therefore, the users can use this machine easily via their own smartphones without the necessity to an extra controlling device. Besides, because of its apt size, this machine is a portable device. Smooth movement without any vibration and adjusting capability for different anatomies are other merits of this new CPM machine.",
"title": ""
},
{
"docid": "be6e3666eba5752a59605a86e5bd932f",
"text": "Accurate knowledge on the absolute or true speed of a vehicle, if and when available, can be used to enhance advanced vehicle dynamics control systems such as anti-lock brake systems (ABS) and auto-traction systems (ATS) control schemes. Current conventional method uses wheel speed measurements to estimate the speed of the vehicle. As a result, indication of the vehicle speed becomes erroneous and, thus, unreliable when large slips occur between the wheels and terrain. This paper describes a fuzzy rule-based Kalman filtering technique which employs an additional accelerometer to complement the wheel-based speed sensor, and produce an accurate estimation of the true speed of a vehicle. We use the Kalman filters to deal with the noise and uncertainties in the speed and acceleration models, and fuzzy logic to tune the covariances and reset the initialization of the filter according to slip conditions detected and measurement-estimation condition. Experiments were conducted using an actual vehicle to verify the proposed strategy. Application of the fuzzy logic rule-based Kalman filter shows that accurate estimates of the absolute speed can be achieved euen under sagnapcant brakang skzd and traction slip conditions.",
"title": ""
},
{
"docid": "335e92a896c6cce646f3ae81c5d9a02c",
"text": "Vulnerabilities in web applications allow malicious users to obtain unrestricted access to private and confidential information. SQL injection attacks rank at the top of the list of threats directed at any database-driven application written for the Web. An attacker can take advantages of web application programming security flaws and pass unexpected malicious SQL statements through a web application for execution by the back-end database. This paper proposes a novel specification-based methodology for the detection of exploitations of SQL injection vulnerabilities. The new approach on the one hand utilizes specifications that define the intended syntactic structure of SQL queries that are produced and executed by the web application and on the other hand monitors the application for executing queries that are in violation of the specification.\n The three most important advantages of the new approach against existing analogous mechanisms are that, first, it prevents all forms of SQL injection attacks; second, its effectiveness is independent of any particular target system, application environment, or DBMS; and, third, there is no need to modify the source code of existing web applications to apply the new protection scheme to them.\n We developed a prototype SQL injection detection system (SQL-IDS) that implements the proposed algorithm. The system monitors Java-based applications and detects SQL injection attacks in real time. We report some preliminary experimental results over several SQL injection attacks that show that the proposed query-specific detection allows the system to perform focused analysis at negligible computational overhead without producing false positives or false negatives. Therefore, the new approach is very efficient in practice.",
"title": ""
},
{
"docid": "c1e3872540aa37d6253b3e52bb3551a9",
"text": "Human Activity recognition (HAR) is an important area of research in ubiquitous computing and Human Computer Interaction. To recognize activities using mobile or wearable sensor, data are collected using appropriate sensors, segmented, needed features extracted and activities categories using discriminative models (SVM, HMM, MLP etc.). Feature extraction is an important stage as it helps to reduce computation time and ensure enhanced recognition accuracy. Earlier researches have used statistical features which require domain expert and handcrafted features. However, the advent of deep learning that extracts salient features from raw sensor data and has provided high performance in computer vision, speech and image recognition. Based on the recent advances recorded in deep learning for human activity recognition, we briefly reviewed the different deep learning methods for human activities implemented recently and then propose a conceptual deep learning frameworks that can be used to extract global features that model the temporal dependencies using Gated Recurrent Units. The framework when implemented would comprise of seven convolutional layer, two Gated recurrent unit and Support Vector Machine (SVM) layer for classification of activity details. The proposed technique is still under development and will be evaluated with benchmarked datasets and compared with other baseline deep learning algorithms.",
"title": ""
},
{
"docid": "46e63e9f9dc006ad46e514adc26c12bd",
"text": "In computer vision, image datasets used for classification are naturally associated with multiple labels and comprised of multiple views, because each image may contain several objects (e.g., pedestrian, bicycle, and tree) and is properly characterized by multiple visual features (e.g., color, texture, and shape). Currently, available tools ignore either the label relationship or the view complementarily. Motivated by the success of the vector-valued function that constructs matrix-valued kernels to explore the multilabel structure in the output space, we introduce multiview vector-valued manifold regularization (MV3MR) to integrate multiple features. MV3MR exploits the complementary property of different features and discovers the intrinsic local geometry of the compact support shared by different features under the theme of manifold regularization. We conduct extensive experiments on two challenging, but popular, datasets, PASCAL VOC' 07 and MIR Flickr, and validate the effectiveness of the proposed MV3MR for image classification.",
"title": ""
}
] |
scidocsrr
|
62abf4c7902fc30c9910a6923efa26b4
|
SciHadoop: Array-based query processing in Hadoop
|
[
{
"docid": "bf56462f283d072c4157d5c5665eead3",
"text": "Various scientific computations have become so complex, and thus computation tools play an important role. In this paper, we explore the state-of-the-art framework providing high-level matrix computation primitives with MapReduce through the case study approach, and demonstrate these primitives with different computation engines to show the performance and scalability. We believe the opportunity for using MapReduce in scientific computation is even more promising than the success to date in the parallel systems literature.",
"title": ""
},
{
"docid": "c5e37e68f7a7ce4b547b10a1888cf36f",
"text": "SciDB [4, 3] is a new open-source data management system intended primarily for use in application domains that involve very large (petabyte) scale array data; for example, scientific applications such as astronomy, remote sensing and climate modeling, bio-science information management, risk management systems in financial applications, and the analysis of web log data. In this talk we will describe our set of motivating examples and use them to explain the features of SciDB. We then briefly give an overview of the project 'in flight', explaining our novel storage manager, array data model, query language, and extensibility frameworks.",
"title": ""
}
] |
[
{
"docid": "de22ed244b7b2c5fb9da0981ec1b9852",
"text": "Knowledge graph embedding aims to construct a low-dimensional and continuous space, which is able to describe the semantics of high-dimensional and sparse knowledge graphs. Among existing solutions, translation models have drawn much attention lately, which use a relation vector to translate the head entity vector, the result of which is close to the tail entity vector. Compared with classical embedding methods, translation models achieve the state-of-the-art performance; nonetheless, the rationale and mechanism behind them still aspire after understanding and investigation. In this connection, we quest into the essence of translation models, and present a generic model, namely, GTrans, to entail all the existing translation models. In GTrans, each entity is interpreted by a combination of two states—eigenstate and mimesis. Eigenstate represents the features that an entity intrinsically owns, and mimesis expresses the features that are affected by associated relations. The weighting of the two states can be tuned, and hence, dynamic and static weighting strategies are put forward to best describe entities in the problem domain. Besides, GTrans incorporates a dynamic relation space for each relation, which not only enables the flexibility of our model but also reduces the noise from other relation spaces. In experiments, we evaluate our proposed model with two benchmark tasks—triplets classification and link prediction. Experiment results witness significant and consistent performance gain that is offered by GTrans over existing alternatives.",
"title": ""
},
{
"docid": "8b6246f6e328bed5fa1064ee8478dda1",
"text": "This study examined the relationships between attachment styles, drama viewing, parasocial interaction, and romantic beliefs. A survey of students revealed that drama viewing was weakly but negatively related to romantic beliefs, controlling for parasocial interaction and attachment styles. Instead, parasocial interaction mediated the effect of drama viewing on romantic beliefs: Those who viewed television dramas more heavily reported higher levels of parasocial interaction, which lead to stronger romantic beliefs. Regarding the attachment styles, anxiety was positively related to drama viewing and parasocial interaction, while avoidance was negatively related to parasocial interaction and romantic beliefs. These findings indicate that attachment styles and parasocial interaction are important for the association of drama viewing and romantic beliefs.",
"title": ""
},
{
"docid": "34896de2fb07161ecfa442581c1c3260",
"text": "Predicting the location of a video based on its content is a very meaningful, yet very challenging problem. Most existing work has focused on developing representative visual features and then searching for visually nearest neighbors in the development set to achieve a prediction. Interestingly, the relationship between scenes has been overlooked in prior work. Two scenes that are visually different, but frequently co-occur in same location, should naturally be considered similar for the geotagging problem. To build upon the above ideas, we propose to model the geo-spatial distributions of scenes by Gaussian Mixture Models (GMMs) and measure the distribution similarity by the Jensen-Shannon divergence (JSD). Subsequently, we present the Spatial Relationship Model (SRM) for geotagging which integrates the geo-spatial relationship of scenes into a hierarchical framework. We segment the Earth's surface into multiple levels of grids and measure the likelihood of input videos with an adaptation to region granularities. We have evaluated our approach using the YFCC100M dataset in the context of the MediaEval 2014 placing task. The total set of 35,000 geotagged videos is further divided into a training set of 25,000 videos and a test set of 10,000 videos. Our experimental results demonstrate the effectiveness of our proposed framework, as our solution achieves good accuracy and outperforms existing visual approaches for video geotagging.",
"title": ""
},
{
"docid": "71cb9b8dd8a19d67d64cdd14d4f82a48",
"text": "This paper illustrates the development of the automatic measurement and control system (AMCS) based on control area network (CAN) bus for the engine electronic control unit (ECU). The system composes of the ECU, test ECU (TECU), simulation ECU (SIMU), usb-CAN communication module (CRU ) and AMCS software platform, which is applied for engine control, data collection, transmission, engine signals simulation and the data conversion from CAN to USB and master control simulation and measurement. The AMCS platform software is designed with VC++ 6.0.This system has been applied in the development of engine ECU widely.",
"title": ""
},
{
"docid": "d93eb3d262bfe5dee777d27d21407cc6",
"text": "Economics can be distinguished from other social sciences by the belief that most (all?) behavior can be explained by assuming that agents have stable, well-defined preferences and make rational choices consistent with those preferences in markets that (eventually) clear. An empirical result qualifies as an anomaly if it is difficult to \"rationalize,\" or if implausible assumptions are necessary to explain it within the paradigm. This column presents a series of such anomalies. Readers are invited to suggest topics for future columns by sending a note with some reference to (or better yet copies oO the relevant research. Comments on anomalies printed here are also welcome. The address is: Richard Thaler, c/o Journal of Economic Perspectives, Johnson Graduate School of Management, Malott Hall, Cornell University, Ithaca, NY 14853. After this issue, the \"Anomalies\" column will no longer appear in every issue and instead will appear occasionally, when a pressing anomaly crosses Dick Thaler's desk. However, suggestions for new columns and comments on old ones are still welcome. Thaler would like to quash one rumor before it gets started, namely that he is cutting back because he has run out of anomalies. Au contraire, it is the dilemma of choosing which juicy anomaly to discuss that lakes so much time.",
"title": ""
},
{
"docid": "b2e5a2395641c004bdc84964d2528b13",
"text": "We propose a novel probabilistic model for visual question answering (Visual QA). The key idea is to infer two sets of embeddings: one for the image and the question jointly and the other for the answers. The learning objective is to learn the best parameterization of those embeddings such that the correct answer has higher likelihood among all possible answers. In contrast to several existing approaches of treating Visual QA as multi-way classification, the proposed approach takes the semantic relationships (as characterized by the embeddings) among answers into consideration, instead of viewing them as independent ordinal numbers. Thus, the learned embedded function can be used to embed unseen answers (in the training dataset). These properties make the approach particularly appealing for transfer learning for open-ended Visual QA, where the source dataset on which the model is learned has limited overlapping with the target dataset in the space of answers. We have also developed large-scale optimization techniques for applying the model to datasets with a large number of answers, where the challenge is to properly normalize the proposed probabilistic models. We validate our approach on several Visual QA datasets and investigate its utility for transferring models across datasets. The empirical results have shown that the approach performs well not only on in-domain learning but also on transfer learning.",
"title": ""
},
{
"docid": "94d40100337b2f6721cdc909513e028d",
"text": "Data deduplication systems detect redundancies between data blocks to either reduce storage needs or to reduce network traffic. A class of deduplication systems splits the data stream into data blocks (chunks) and then finds exact duplicates of these blocks.\n This paper compares the influence of different chunking approaches on multiple levels. On a macroscopic level, we compare the chunking approaches based on real-life user data in a weekly full backup scenario, both at a single point in time as well as over several weeks.\n In addition, we analyze how small changes affect the deduplication ratio for different file types on a microscopic level for chunking approaches and delta encoding. An intuitive assumption is that small semantic changes on documents cause only small modifications in the binary representation of files, which would imply a high ratio of deduplication. We will show that this assumption is not valid for many important file types and that application-specific chunking can help to further decrease storage capacity demands.",
"title": ""
},
{
"docid": "4a3f7e89874c76f62aa97ef6a114d574",
"text": "A robust approach to solving linear optimization problems with uncertain data was proposed in the early 1970s and has recently been extensively studied and extended. Under this approach, we are willing to accept a suboptimal solution for the nominal values of the data in order to ensure that the solution remains feasible and near optimal when the data changes. A concern with such an approach is that it might be too conservative. In this paper, we propose an approach that attempts to make this trade-off more attractive; that is, we investigate ways to decrease what we call the price of robustness. In particular, we flexibly adjust the level of conservatism of the robust solutions in terms of probabilistic bounds of constraint violations. An attractive aspect of our method is that the new robust formulation is also a linear optimization problem. Thus we naturally extend our methods to discrete optimization problems in a tractable way. We report numerical results for a portfolio optimization problem, a knapsack problem, and a problem from the Net Lib library.",
"title": ""
},
{
"docid": "3785c5a330da1324834590c8ed92f743",
"text": "We address the problem of joint detection and segmentation of multiple object instances in an image, a key step towards scene understanding. Inspired by data-driven methods, we propose an exemplar-based approach to the task of instance segmentation, in which a set of reference image/shape masks is used to find multiple objects. We design a novel CRF framework that jointly models object appearance, shape deformation, and object occlusion. To tackle the challenging MAP inference problem, we derive an alternating procedure that interleaves object segmentation and shape/appearance adaptation. We evaluate our method on two datasets with instance labels and show promising results.",
"title": ""
},
{
"docid": "8c2e69380cebdd6affd43c6bfed2fc51",
"text": "A fundamental property of many plasma-membrane proteins is their association with the underlying cytoskeleton to determine cell shape, and to participate in adhesion, motility and other plasma-membrane processes, including endocytosis and exocytosis. The ezrin–radixin–moesin (ERM) proteins are crucial components that provide a regulated linkage between membrane proteins and the cortical cytoskeleton, and also participate in signal-transduction pathways. The closely related tumour suppressor merlin shares many properties with ERM proteins, yet also provides a distinct and essential function.",
"title": ""
},
{
"docid": "e84699f276c807eb7fddb49d61bd8ae8",
"text": "Cyberbotics Ltd. develops Webots, a mobile robotics simulation software that provides you with a rapid prototyping environment for modelling, programming and simulating mobile robots. The provided robot libraries enable you to transfer your control programs to several commercially available real mobile robots. Webots lets you define and modify a complete mobile robotics setup, even several different robots sharing the same environment. For each object, you can define a number of properties, such as shape, color, texture, mass, friction, etc. You can equip each robot with a large number of available sensors and actuators. You can program these robots using your favorite development environment, simulate them and optionally transfer the resulting programs onto your real robots. Webots has been developed in collaboration with the Swiss Federal Institute of Technology in Lausanne, thoroughly tested, well documented and continuously maintained for over 7 years. It is now the main commercial product available from Cyberbotics Ltd.",
"title": ""
},
{
"docid": "b29f2d688e541463b80006fac19eaf20",
"text": "Autonomous navigation has become an increasingly popular machine learning application. Recent advances in deep learning have also brought huge improvements to autonomous navigation. However, prior outdoor autonomous navigation methods depended on various expensive sensors or expensive and sometimes erroneously labeled real data. In this paper, we propose an autonomous navigation method that does not require expensive labeled real images and uses only a relatively inexpensive monocular camera. Our proposed method is based on (1) domain adaptation with an adversarial learning framework and (2) exploiting synthetic data from a simulator. To the best of the authors’ knowledge, this is the first work to apply domain adaptation with adversarial networks to autonomous navigation. We present empirical results on navigation in outdoor courses using an unmanned aerial vehicle. The performance of our method is comparable to that of a supervised model with labeled real data, although our method does not require any label information for the real data. Our proposal includes a theoretical analysis that supports the applicability of our approach.",
"title": ""
},
{
"docid": "129dd084e485da5885e2720a4bddd314",
"text": "In the present day developing houses, the procedures adopted during the development of software using agile methodologies are acknowledged as a better option than the procedures followed during conventional software development due to its innate characteristics such as iterative development, rapid delivery and reduced risk. Hence, it is desirable that the software development industries should have proper planning for estimating the effort required in agile software development. The existing techniques such as expert opinion, analogy and disaggregation are mostly observed to be ad hoc and in this manner inclined to be mistaken in a number of cases. One of the various approaches for calculating effort of agile projects in an empirical way is the story point approach (SPA). This paper presents a study on analysis of prediction accuracy of estimation process executed in order to improve it using SPA. Different machine learning techniques such as decision tree, stochastic gradient boosting and random forest are considered in order to assess prediction more qualitatively. A comparative analysis of these techniques with existing techniques is also presented and analyzed in order to critically examine their performance.",
"title": ""
},
{
"docid": "cc220d8ae1fa77b9e045022bef4a6621",
"text": "Cuneiform tablets appertain to the oldest textual artifacts and are in extent comparable to texts written in Latin or ancient Greek. The Cuneiform Commentaries Project (CPP) from Yale University provides tracings of cuneiform tablets with annotated transliterations and translations. As a part of our work analyzing cuneiform script computationally with 3D-acquisition and word-spotting, we present a first approach for automatized learning of transliterations of cuneiform tablets based on a corpus of parallel lines. These consist of manually drawn cuneiform characters and their transliteration into an alphanumeric code. Since the Cuneiform script is only available as raster-data, we segment lines with a projection profile, extract Histogram of oriented Gradients (HoG) features, detect outliers caused by tablet damage, and align those features with the transliteration. We apply methods from part-of-speech tagging to learn a correspondence between features and transliteration tokens. We evaluate point-wise classification with K-Nearest Neighbors (KNN) and a Support Vector Machine (SVM); sequence classification with a Hidden Markov Model (HMM) and a Structured Support Vector Machine (SVM-HMM). Analyzing our findings, we reach the conclusion that the sparsity of data, inconsistent labeling and the variety of tracing styles do currently not allow for fully automatized transliterations with the presented approach. However, the pursuit of automated learning of transliterations is of great relevance as manual annotation in larger quantities is not viable, given the few experts capable of transcribing cuneiform tablets.",
"title": ""
},
{
"docid": "5575b83290bec9b0c5da065f30762eb5",
"text": "The resource-based view can be positioned relative to at least three theoretical traditions: SCPbased theories of industry determinants of firm performance, neo-classical microeconomics, and evolutionary economics. In the 1991 article, only the first of these ways of positioning the resourcebased view is explored. This article briefly discusses some of the implications of positioning the resource-based view relative to these other two literatures; it also discusses some of the empirical implications of each of these different resource-based theories. © 2001 Elsevier Science Inc. All rights reserved.",
"title": ""
},
{
"docid": "5689528f407b9a32320e0f303f682b04",
"text": "Previous studies have shown that there is a non-trivial amount of duplication in source code. This paper analyzes a corpus of 4.5 million non-fork projects hosted on GitHub representing over 428 million files written in Java, C++, Python, and JavaScript. We found that this corpus has a mere 85 million unique files. In other words, 70% of the code on GitHub consists of clones of previously created files. There is considerable variation between language ecosystems. JavaScript has the highest rate of file duplication, only 6% of the files are distinct. Java, on the other hand, has the least duplication, 60% of files are distinct. Lastly, a project-level analysis shows that between 9% and 31% of the projects contain at least 80% of files that can be found elsewhere. These rates of duplication have implications for systems built on open source software as well as for researchers interested in analyzing large code bases. As a concrete artifact of this study, we have created DéjàVu, a publicly available map of code duplicates in GitHub repositories.",
"title": ""
},
{
"docid": "8e5c2bfb2ef611c94a02c2a214c4a968",
"text": "This paper defines and explores a somewhat different type of genetic algorithm (GA) a messy genetic algorithm (mGA). Messy GAs process variable-length strings that may be either underor overspecified with respect to the problem being solved . As nature has formed its genotypes by progressing from simple to more complex life forms, messy GAs solve problems by combining relatively short, well-tested building blocks to form longer, more complex strings that increasingly cover all features of a problem. This approach stands in contrast to the usual fixed-length, fixed-coding genetic algorithm, where the existence of the requisite tight linkage is taken for granted or ignored altogether. To compare the two approaches, a 3D-bit, orderthree-deceptive problem is searched using a simple GA and a messy GA. Using a random but fixed ordering of the bits, the simple GA makes errors at roughly three-quarters of its positions; under a worstcase ordering, the simple GA errs at all positions. In contrast to the simple GA results, the messy GA repeatedly solves the same problem to optimality. Prior to this time, no GA had ever solved a provably difficult problem to optimality without prior knowledge of good string arrangements. The mGA presented herein repeatedly achieves globally optimal results without such knowledge, and it does so at the very first generation in which strings are long enough to cover the problem. The solution of a difficult nonlinear problem to optimality suggests that messy GAs can solve more difficult problems than has been possible to date with other genetic algorithms. The ramifications of these techniques in search and machine learning are explored, including the possibility of messy floating-point codes, messy permutations, and messy classifiers. © 1989 Complex Systems Publications , Inc. 494 David E. Goldberg, Bradley Kotb, an d Kalyanmoy Deb",
"title": ""
},
{
"docid": "1a8d7f9766de483fc70bb98d2925f9b1",
"text": "Data mining applications are becoming a more common tool in understanding and solving educational and administrative problems in higher education. In general, research in educational mining focuses on modeling student's performance instead of instructors' performance. One of the common tools to evaluate instructors' performance is the course evaluation questionnaire to evaluate based on students' perception. In this paper, four different classification techniques - decision tree algorithms, support vector machines, artificial neural networks, and discriminant analysis - are used to build classifier models. Their performances are compared over a data set composed of responses of students to a real course evaluation questionnaire using accuracy, precision, recall, and specificity performance metrics. Although all the classifier models show comparably high classification performances, C5.0 classifier is the best with respect to accuracy, precision, and specificity. In addition, an analysis of the variable importance for each classifier model is done. Accordingly, it is shown that many of the questions in the course evaluation questionnaire appear to be irrelevant. Furthermore, the analysis shows that the instructors' success based on the students' perception mainly depends on the interest of the students in the course. The findings of this paper indicate the effectiveness and expressiveness of data mining models in course evaluation and higher education mining. Moreover, these findings may be used to improve the measurement instruments.",
"title": ""
},
{
"docid": "7db1b370d0e14e80343cbc7718bbb6c9",
"text": "T free-riding problem occurs if the presales activities needed to sell a product can be conducted separately from the actual sale of the product. Intuitively, free riding should hurt the retailer that provides that service, but the author shows analytically that free riding benefits not only the free-riding retailer, but also the retailer that provides the service when customers are heterogeneous in terms of their opportunity costs for shopping. The service-providing retailer has a postservice advantage, because customers who have resolved their matching uncertainty through sales service incur zero marginal shopping cost if they purchase from the service-providing retailer rather than the free-riding retailer. Moreover, allowing free riding gives the free rider less incentive to compete with the service provider on price, because many customers eventually will switch to it due to their own free riding. In turn, this induced soft strategic response enables the service provider to charge a higher price and enjoy the strictly positive profit that otherwise would have been wiped away by head-to-head price competition. Therefore, allowing free riding can be regarded as a necessary mechanism that prevents an aggressive response from another retailer and reduces the intensity of price competition.",
"title": ""
},
{
"docid": "994a674367471efecd38ac22b3c209fc",
"text": "In vehicular ad hoc networks (VANETs), trust establishment among vehicles is important to secure integrity and reliability of applications. In general, trust and reliability help vehicles to collect correct and credible information from surrounding vehicles. On top of that, a secure trust model can deal with uncertainties and risk taking from unreliable information in vehicular environments. However, inaccurate, incomplete, and imprecise information collected by vehicles as well as movable/immovable obstacles have interrupting effects on VANET. In this paper, a fuzzy trust model based on experience and plausibility is proposed to secure the vehicular network. The proposed trust model executes a series of security checks to ensure the correctness of the information received from authorized vehicles. Moreover, fog nodes are adopted as a facility to evaluate the level of accuracy of event’s location. The analyses show that the proposed solution not only detects malicious attackers and faulty nodes, but also overcomes the uncertainty and imprecision of data in vehicular networks in both line of sight and non-line of sight environments.",
"title": ""
}
] |
scidocsrr
|
5cf491216962a850c261749dc519155d
|
Deep Learning For Video Saliency Detection
|
[
{
"docid": "b716af4916ac0e4a0bf0b040dccd352b",
"text": "Modeling visual attention-particularly stimulus-driven, saliency-based attention-has been a very active research area over the past 25 years. Many different models of attention are now available which, aside from lending theoretical contributions to other fields, have demonstrated successful applications in computer vision, mobile robotics, and cognitive systems. Here we review, from a computational perspective, the basic concepts of attention implemented in these models. We present a taxonomy of nearly 65 models, which provides a critical comparison of approaches, their capabilities, and shortcomings. In particular, 13 criteria derived from behavioral and computational studies are formulated for qualitative comparison of attention models. Furthermore, we address several challenging issues with models, including biological plausibility of the computations, correlation with eye movement datasets, bottom-up and top-down dissociation, and constructing meaningful performance measures. Finally, we highlight current research trends in attention modeling and provide insights for future.",
"title": ""
},
{
"docid": "ed9d6571634f30797fb338a928cc8361",
"text": "In this paper, we study the challenging problem of tracking the trajectory of a moving object in a video with possibly very complex background. In contrast to most existing trackers which only learn the appearance of the tracked object online, we take a different approach, inspired by recent advances in deep learning architectures, by putting more emphasis on the (unsupervised) feature learning problem. Specifically, by using auxiliary natural images, we train a stacked denoising autoencoder offline to learn generic image features that are more robust against variations. This is then followed by knowledge transfer from offline training to the online tracking process. Online tracking involves a classification neural network which is constructed from the encoder part of the trained autoencoder as a feature extractor and an additional classification layer. Both the feature extractor and the classifier can be further tuned to adapt to appearance changes of the moving object. Comparison with the state-of-the-art trackers on some challenging benchmark video sequences shows that our deep learning tracker is more accurate while maintaining low computational cost with real-time performance when our MATLAB implementation of the tracker is used with a modest graphics processing unit (GPU).",
"title": ""
}
] |
[
{
"docid": "6b79d1db9565fc7540d66ff8bf5aae1f",
"text": "Recognizing sarcasm often requires a deep understanding of multiple sources of information, including the utterance, the conversational context, and real world facts. Most of the current sarcasm detection systems consider only the utterance in isolation. There are some limited attempts toward taking into account the conversational context. In this paper, we propose an interpretable end-to-end model that combines information from both the utterance and the conversational context to detect sarcasm, and demonstrate its effectiveness through empirical evaluations. We also study the behavior of the proposed model to provide explanations for the model’s decisions. Importantly, our model is capable of determining the impact of utterance and conversational context on the model’s decisions. Finally, we provide an ablation study to illustrate the impact of different components of the proposed model.",
"title": ""
},
{
"docid": "904454a191da497071ee9b835561c6e6",
"text": "We introduce a stochastic discrete automaton model to simulate freeway traffic. Monte-Carlo simulations of the model show a transition from laminar traffic flow to start-stopwaves with increasing vehicle density, as is observed in real freeway traffic. For special cases analytical results can be obtained.",
"title": ""
},
{
"docid": "6669f61c302d79553a3e49a4f738c933",
"text": "Imagining urban space as being comfortable or fearful is studied as an effect of people’s connections to their residential area communication infrastructure. Geographic Information System (GIS) modeling and spatial-statistical methods are used to process 215 mental maps obtained from respondents to a multilingual survey of seven ethnically marked residential communities of Los Angeles. Spatial-statistical analyses reveal that fear perceptions of Los Angeles urban space are not associated with commonly expected causes of fear, such as high crime victimization likelihood. The main source of discomfort seems to be presence of non-White and non-Asian populations. Respondents more strongly connected to television and interpersonal communication channels are relatively more fearful of these populations than those less strongly connected. Theoretical, methodological, and community-building policy implications are discussed.",
"title": ""
},
{
"docid": "d922dbcdd2fb86e7582a4fb78990990e",
"text": "This paper presents a novel system to estimate body pose configuration from a single depth map. It combines both pose detection and pose refinement. The input depth map is matched with a set of pre-captured motion exemplars to generate a body configuration estimation, as well as semantic labeling of the input point cloud. The initial estimation is then refined by directly fitting the body configuration with the observation (e.g., the input depth). In addition to the new system architecture, our other contributions include modifying a point cloud smoothing technique to deal with very noisy input depth maps, a point cloud alignment and pose search algorithm that is view-independent and efficient. Experiments on a public dataset show that our approach achieves significantly higher accuracy than previous state-of-art methods.",
"title": ""
},
{
"docid": "c2453816adf52157fca295274a4d8627",
"text": "Air quality monitoring is extremely important as air pollution has a direct impact on human health. Low-cost gas sensors are used to effectively perceive the environment by mounting them on top of mobile vehicles, for example, using a public transport network. Thus, these sensors are part of a mobile network and perform from time to time measurements in each others vicinity. In this paper, we study three calibration algorithms that exploit co-located sensor measurements to enhance sensor calibration and consequently the quality of the pollution measurements on-the-fly. Forward calibration, based on a traditional approach widely used in the literature, is used as performance benchmark for two novel algorithms: backward and instant calibration. We validate all three algorithms with real ozone pollution measurements carried out in an urban setting by comparing gas sensor output to high-quality measurements from analytical instruments. We find that both backward and instant calibration reduce the average measurement error by a factor of two compared to forward calibration. Furthermore, we unveil the arising difficulties if sensor calibration is not based on reliable reference measurements but on sensor readings of low-cost gas sensors which is inevitable in a mobile scenario with only a few reliable sensors. We propose a solution and evaluate its effect on the measurement accuracy in experiments and simulation.",
"title": ""
},
{
"docid": "c0ee7fef7f96db6908f49170c6c75b2c",
"text": "Improving Neural Networks with Dropout Nitish Srivastava Master of Science Graduate Department of Computer Science University of Toronto 2013 Deep neural nets with a huge number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from a neural network during training. This prevents the units from co-adapting too much. Dropping units creates thinned networks during training. The number of possible thinned networks is exponential in the number of units in the network. At test time all possible thinned networks are combined using an approximate model averaging procedure. Dropout training followed by this approximate model combination significantly reduces overfitting and gives major improvements over other regularization methods. In this work, we describe models that improve the performance of neural networks using dropout, often obtaining state-of-the-art results on benchmark datasets.",
"title": ""
},
{
"docid": "b73a9a7770a2bbd5edcc991d7b848371",
"text": "This paper overviews various switched flux permanent magnet machines and their design and performance features, with particular emphasis on machine topologies with reduced magnet usage or without using magnet, as well as with variable flux capability. In addition, this paper also describes their relationships with doubly-salient permanent magnet machines and flux reversal permanent magnet machines.",
"title": ""
},
{
"docid": "89526592b297342697c131daba388450",
"text": "Fundamental and advanced developments in neum-fuzzy synergisms for modeling and control are reviewed. The essential part of neuro-fuuy synergisms comes from a common framework called adaptive networks, which unifies both neural networks and fuzzy models. The f u u y models under the framework of adaptive networks is called Adaptive-Network-based Fuzzy Inference System (ANFIS), which possess certain advantages over neural networks. We introduce the design methods f o r ANFIS in both modeling and control applications. Current problems and future directions for neuro-fuzzy approaches are also addressed.",
"title": ""
},
{
"docid": "e36e0c8659b8bae3acf0f178fce362c3",
"text": "Clinical data describing the phenotypes and treatment of patients represents an underused data source that has much greater research potential than is currently realized. Mining of electronic health records (EHRs) has the potential for establishing new patient-stratification principles and for revealing unknown disease correlations. Integrating EHR data with genetic data will also give a finer understanding of genotype–phenotype relationships. However, a broad range of ethical, legal and technical reasons currently hinder the systematic deposition of these data in EHRs and their mining. Here, we consider the potential for furthering medical research and clinical care using EHR data and the challenges that must be overcome before this is a reality.",
"title": ""
},
{
"docid": "35812bda0819769efb1310d1f6d5defd",
"text": "Distributed Denial-of-Service (DDoS) attacks are increasing in frequency and volume on the Internet, and there is evidence that cyber-criminals are turning to Internet-of-Things (IoT) devices such as cameras and vending machines as easy launchpads for large-scale attacks. This paper quantifies the capability of consumer IoT devices to participate in reflective DDoS attacks. We first show that household devices can be exposed to Internet reflection even if they are secured behind home gateways. We then evaluate eight household devices available on the market today, including lightbulbs, webcams, and printers, and experimentally profile their reflective capability, amplification factor, duration, and intensity rate for TCP, SNMP, and SSDP based attacks. Lastly, we demonstrate reflection attacks in a real-world setting involving three IoT-equipped smart-homes, emphasising the imminent need to address this problem before it becomes widespread.",
"title": ""
},
{
"docid": "75952f3945628c15a66b7288e6c1d1a7",
"text": "Most of the samples discovered are variations of known malicious programs and thus have similar structures, however, there is no method of malware classification that is completely effective. To address this issue, the approach proposed in this paper represents a malware in terms of a vector, in which each feature consists of the amount of APIs called from a Dynamic Link Library (DLL). To determine if this approach is useful to classify malware variants into the correct families, we employ Euclidean Distance and a Multilayer Perceptron with several learning algorithms. The experimental results are analyzed to determine which method works best with the approach. The experiments were conducted with a database that contains real samples of worms and trojans and show that is possible to classify malware variants using the number of functions imported per library. However, the accuracy varies depending on the method used for the classification.",
"title": ""
},
{
"docid": "a2d7fc045b1c8706dbfe3772a8f6ef70",
"text": "This paper is concerned with the problem of domain adaptation with multiple sources from a causal point of view. In particular, we use causal models to represent the relationship between the features X and class label Y , and consider possible situations where different modules of the causal model change with the domain. In each situation, we investigate what knowledge is appropriate to transfer and find the optimal target-domain hypothesis. This gives an intuitive interpretation of the assumptions underlying certain previous methods and motivates new ones. We finally focus on the case where Y is the cause for X with changing PY and PX|Y , that is, PY and PX|Y change independently across domains. Under appropriate assumptions, the availability of multiple source domains allows a natural way to reconstruct the conditional distribution on the target domain; we propose to model PX|Y (the process to generate effect X from cause Y ) on the target domain as a linear mixture of those on source domains, and estimate all involved parameters by matching the target-domain feature distribution. Experimental results on both synthetic and real-world data verify our theoretical results. Traditional machine learning relies on the assumption that both training and test data are from the same distribution. In practice, however, training and test data are probably sampled under different conditions, thus violating this assumption, and the problem of domain adaptation (DA) arises. Consider remote sensing image classification as an example. Suppose we already have several data sets on which the class labels are known; they are called source domains here. For a new data set, or a target domain, it is usually difficult to find the ground truth reference labels, and we aim to determine the labels by making use of the information from the source domains. Note that those domains are usually obtained in different areas and time periods, and that the corresponding data distribution various due to the change in illumination conditions, physical factors related to ground (e.g., different soil moisture or composition), vegetation, and atmospheric conditions. Other well-known instances of this situation include sentiment data analysis (Blitzer, Dredze, and Pereira 2007) and flow cytometry data analysis (Blanchard, Lee, and Scott 2011). DA approaches have Copyright c © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. many applications in varies areas including natural language processing, computer vision, and biology. For surveys on DA, see, e.g., (Jiang 2008; Pan and Yang 2010; Candela et al. 2009). In this paper, we consider the situation with n source domains on which both the features X and label Y are given, i.e., we are given (x,y) = (x k , y (i) k ) mi k=1, where i = 1, ..., n, and mi is the sample size of the ith source domain. Our goal is to find the classifier for the target domain, on which only the features x = (xk) m k=1 are available. Here we are concerned with a difficult scenario where no labeled point is available in the target domain, known as unsupervised domain adaptation. Since PXY changes across domains, we have to find what knowledge in the source domains should be transferred to the target one. Previous work in domain adaptation has usually assumed that PX changes but PY |X remain the same, i.e., the covariate shift situation; see, e.g., (Shimodaira 2000; Huang et al. 2007; Sugiyama et al. 
2008; Ben-David, Shalev-Shwartz, and Urner 2012). It is also known as sample selection bias (particularly on the features X) in (Zadrozny 2004). In practice it is very often that both PX and PY |X change simultaneously across domains. For instance, both of them are likely to change over time and location for a satellite image classification system. If the data distribution changes arbitrarily across domains, clearly knowledge from the sources may not help in predicting Y on the target domain (Rosenstein et al. 2005). One has to find what type of information should be transferred from sources to the target. One possibility is to assume the change in both PX and PY |X is due to the change in PY , while PX|Y remains the same, as known as prior probability shift (Storkey 2009; Plessis and Sugiyama 2012) or target shift (Zhang et al. 2013). The latter further models the change in PX|Y caused by a location-scale (LS) transformation of the features for each class. The constraint of the LS transformation renders PX|Y on the target domain, denoted by P t X|Y , identifiable; however, it might be too restrictive. Fortunately, the availability of multiple source domains provides more hints as to find P t X|Y , as well as P t Y |X . Several algorithms have been proposed to combine knowledge from multiple source domains. For instance, (Mansour, Mohri, and Rostamizadeh 2008) proposed to form the target hypothesis by combining source hypotheses with a distribution weighted rule. (Gao et al. 2008), (Duan et al. 2009), and (Chattopadhyay et al. 2011) combine the predictions made by the source hypotheses, with the weights determined in different ways. An intuitive interpretation of the assumptions underlying those algorithms would facilitate choosing or developing DA methods for the problem at hand. To the best of our knowledge, however, it is still missing in the literature. One of our contributions in this paper is to provide such an interpretation. This paper studies the multi-source DA problem from a causal point of view where we consider the underlying data generating process behind the observed domains. We are particularly interested in what types of information stay the same, what types of information change, and how they change across domains. This enables us to construct the optimal hypothesis for the target domain in various situations. To this end, we use causal models to represent the relationship between X and Y , because they provide a compact description of the properties of the change in the data distribution.1 They, for instance, help characterize transportability of experimental findings (Pearl and Bareinboim 2011) or recoverability from selection bias (Bareinboim, Tian, and Pearl 2014). As another contribution, we further focus on a typical DA scenario where both PY and PX|Y (or the causal mechanism to generate effect X from cause Y ) change across domains, but their changes are independent from each other, as implied by the causal model Y → X . We assume that the source domains contains rich information such that for each class, P t X|Y can be approximated by a linear mixture of PX|Y on source domains. Together with other mild conditions on PX|Y , we then show that P t X|Y , as well as P t Y , is identifiable (or can be uniquely recovered). We present a computationally efficient method to estimate the involved parameters based on kernel mean distribution embedding (Smola et al. 2007; Gretton et al. 
2007), followed by several approaches to constructing the target classifier using those parameters. One might wonder how to find the causal information underlying the data to facilitate domain adaptation. We note that in practice, background causal knowledge is usually available, helping formulating how to transfer the knowledge from source domains to the target. Even if this is not the case, multiple source domains with different data distributions may allow one to identify the causal structure, since the causal knowledge can be seen from the change in data distributions; see e.g., (Tian and Pearl 2001). 1 Possible DA Situations and Their Solutions DA can be considered as a learning problem in nonstationary environments (Sugiyama and Kawanabe 2012). It is helpful to find how the data distribution changes; it provides the clues as to find the learning machine for the target domain. The causal model also describes how the components of the joint distribution are related to each other, which, for instance, gives a causal explanation of the behavior of semi-supervised learning (Schölkopf et al. 2012). Table 1: Notation used in this paper. X , Y random variables X , Y domains",
"title": ""
},
{
"docid": "bd590555337d3ada2c641c5f1918cf2c",
"text": "Machine learning addresses the question of how to build computers that improve automatically through experience. It is one of today’s most rapidly growing technical fields, lying at the intersection of computer science and statistics, and at the core of artificial intelligence and data science. Recent progress in machine learning has been driven both by the development of new learning algorithms and theory and by the ongoing explosion in the availability of online data and low-cost computation. The adoption of data-intensive machine-learning methods can be found throughout science, technology and commerce, leading to more evidence-based decision-making across many walks of life, including health care, manufacturing, education, financial modeling, policing, and marketing.",
"title": ""
},
{
"docid": "9ff6d7a36646b2f9170bd46d14e25093",
"text": "Psychedelic drugs such as LSD and psilocybin are often claimed to be capable of inducing life-changing experiences described as mystical or transcendental, especially if high doses are taken. The present study examined possible enduring effects of such experiences by comparing users of psychedelic drugs (n = 88), users of nonpsychedelic illegal drugs (e.g., marijuana, amphetamines) (n = 29) and non illicit drug-using social drinkers (n = 66) on questionnaire measures of values, beliefs and emotional empathy. Samples were obtained from Israel (n = 110) and Australia (n = 73) in a cross-cultural comparison to see if values associated with psychedelic drug use transcended culture of origin. Psychedelic users scored significantly higher on mystical beliefs (e.g., oneness with God and the universe) and life values of spirituality and concern for others than the other groups, and lower on the value of financial prosperity, irrespective of culture of origin. Users of nonpsychedelic illegal drugs scored significantly lower on a measure of coping ability than both psychedelic users and non illicit drug users. Both groups of illegal drug users scored significantly higher on empathy than non illicit drug users. Results are discussed in the context of earlier findings from Pahnke (1966) and Doblin (1991) of the transformative effect of psychedelic experiences, although the possibility remains that present findings reflect predrug characteristics of those who chose to take psychedelic drugs rather than effects of the drugs themselves.",
"title": ""
},
{
"docid": "d24331326c59911f9c1cdc5dd5f14845",
"text": "A novel topology for a soft-switching buck dc– dc converter with a coupled inductor is proposed. The soft-switching buck converter has advantages over the traditional hardswitching converters. The most significant advantage is that it offers a lower switching loss. This converter operates under a zero-current switching condition at turn on and a zero-voltage switching condition at turn off. It presents the circuit configuration with a least components for realizing soft switching. Because of soft switching, the proposed converter can attain a high efficiency under heavy load conditions. Likewise, a high efficiency is also attained under light load conditions, which is significantly different from other soft switching buck converters",
"title": ""
},
{
"docid": "30de4ba4607cfcc106361fa45b89a628",
"text": "The purpose of this study is to characterize and understand the long-term behavior of the output from megavoltage radiotherapy linear accelerators. Output trends of nine beams from three linear accelerators over a period of more than three years are reported and analyzed. Output, taken during daily warm-up, forms the basis of this study. The output is measured using devices having ion chambers. These are not calibrated by accredited dosimetry laboratory, but are baseline-compared against monthly output which is measured using calibrated ion chambers. We consider the output from the daily check devices as it is, and sometimes normalized it by the actual output measured during the monthly calibration of the linacs. The data show noisy quasi-periodic behavior. The output variation, if normalized by monthly measured \"real' output, is bounded between ± 3%. Beams of different energies from the same linac are correlated with a correlation coefficient as high as 0.97, for one particular linac, and as low as 0.44 for another. These maximum and minimum correlations drop to 0.78 and 0.25 when daily output is normalized by the monthly measurements. These results suggest that the origin of these correlations is both the linacs and the daily output check devices. Beams from different linacs, independent of their energies, have lower correlation coefficient, with a maximum of about 0.50 and a minimum of almost zero. The maximum correlation drops to almost zero if the output is normalized by the monthly measured output. Some scatter plots of pairs of beam output from the same linac show band-like structures. These structures are blurred when the output is normalized by the monthly calibrated output. Fourier decomposition of the quasi-periodic output is consistent with a 1/f power law. The output variation appears to come from a distorted normal distribution with a mean of slightly greater than unity. The quasi-periodic behavior is manifested in the seasonally averaged output, showing annual variability with negative variations in the winter and positive in the summer. This trend is weakened when the daily output is normalized by the monthly calibrated output, indicating that the variation of the periodic component may be intrinsic to both the linacs and the daily measurement devices. Actual linac output was measured monthly. It needs to be adjusted once every three to six months for our tolerance and action levels. If these adjustments are artificially removed, then there is an increase in output of about 2%-4% per year.",
"title": ""
},
{
"docid": "ed0444685c9a629c7d1fda7c4912fd55",
"text": "Citrus fruits have potential health-promoting properties and their essential oils have long been used in several applications. Due to biological effects described to some citrus species in this study our objectives were to analyze and compare the phytochemical composition and evaluate the anti-inflammatory effect of essential oils (EO) obtained from four different Citrus species. Mice were treated with EO obtained from C. limon, C. latifolia, C. aurantifolia or C. limonia (10 to 100 mg/kg, p.o.) and their anti-inflammatory effects were evaluated in chemical induced inflammation (formalin-induced licking response) and carrageenan-induced inflammation in the subcutaneous air pouch model. A possible antinociceptive effect was evaluated in the hot plate model. Phytochemical analyses indicated the presence of geranial, limonene, γ-terpinene and others. EOs from C. limon, C. aurantifolia and C. limonia exhibited anti-inflammatory effects by reducing cell migration, cytokine production and protein extravasation induced by carrageenan. These effects were also obtained with similar amounts of pure limonene. It was also observed that C. aurantifolia induced myelotoxicity in mice. Anti-inflammatory effect of C. limon and C. limonia is probably due to their large quantities of limonene, while the myelotoxicity observed with C. aurantifolia is most likely due to the high concentration of citral. Our results indicate that these EOs from C. limon, C. aurantifolia and C. limonia have a significant anti-inflammatory effect; however, care should be taken with C. aurantifolia.",
"title": ""
},
{
"docid": "adb2d3d17599c0aa36236f923adc934f",
"text": "This paper reports, to our knowledge, the first spherical induction motor (SIM) operating with closed loop control. The motor can produce up to 4 Nm of torque along arbitrary axes with continuous speeds up to 300 rpm. The motor's rotor is a two-layer copper-over-iron spherical shell. The stator has four independent inductors that generate thrust forces on the rotor surface. The motor is also equipped with four optical mouse sensors that measure surface velocity to estimate the rotor's angular velocity, which is used for vector control of the inductors and control of angular velocity and orientation. Design considerations including torque distribution for the inductors, angular velocity sensing, angular velocity control, and orientation control are presented. Experimental results show accurate tracking of velocity and orientation commands.",
"title": ""
},
{
"docid": "40083241b498dc6ac14de7dcc0b38399",
"text": "We report on an automated runtime anomaly detection method at the application layer of multi-node computer systems. Although several network management systems are available in the market, none of them have sufficient capabilities to detect faults in multi-tier Web-based systems with redundancy. We model a Web-based system as a weighted graph, where each node represents a \"service\" and each edge represents a dependency between services. Since the edge weights vary greatly over time, the problem we address is that of anomaly detection from a time sequence of graphs.In our method, we first extract a feature vector from the adjacency matrix that represents the activities of all of the services. The heart of our method is to use the principal eigenvector of the eigenclusters of the graph. Then we derive a probability distribution for an anomaly measure defined for a time-series of directional data derived from the graph sequence. Given a critical probability, the threshold value is adaptively updated using a novel online algorithm.We demonstrate that a fault in a Web application can be automatically detected and the faulty services are identified without using detailed knowledge of the behavior of the system.",
"title": ""
},
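A minimal sketch of the dependency-graph idea described above: summarize each time step's weighted adjacency matrix by its principal eigenvector (an "activity vector") and score steps by how far that vector points away from the recent typical direction. The fixed window and the plain cosine dissimilarity below are simplifications of the adaptive, probability-based thresholding used in the paper.

```python
import numpy as np

def activity_vector(adjacency: np.ndarray) -> np.ndarray:
    """Principal eigenvector of the (symmetrized) service-dependency matrix."""
    sym = (adjacency + adjacency.T) / 2.0
    eigvals, eigvecs = np.linalg.eigh(sym)
    v = eigvecs[:, np.argmax(eigvals)]
    return np.abs(v) / np.linalg.norm(v)

def anomaly_scores(adjacency_series, window=5):
    """Dissimilarity of each step's activity vector from the mean of the
    previous `window` vectors (1 - cosine similarity)."""
    vectors = [activity_vector(a) for a in adjacency_series]
    scores = []
    for t in range(window, len(vectors)):
        typical = np.mean(vectors[t - window:t], axis=0)
        typical /= np.linalg.norm(typical)
        scores.append(1.0 - float(vectors[t] @ typical))
    return scores
```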
{
"docid": "aa3e8c4e4695d8c372987c8e409eb32f",
"text": "We present a novel sketch-based system for the interactive modeling of a variety of free-form 3D objects using just a few strokes. Our technique is inspired by the traditional illustration strategy for depicting 3D forms where the basic geometric forms of the subjects are identified, sketched and progressively refined using few key strokes. We introduce two parametric surfaces, rotational and cross sectional blending, that are inspired by this illustration technique. We also describe orthogonal deformation and cross sectional oversketching as editing tools to complement our modeling techniques. Examples with models ranging from cartoon style to botanical illustration demonstrate the capabilities of our system.",
"title": ""
}
] |
scidocsrr
|
4f6fbc474010df5403b47dcc2023ea7c
|
Security Improvement for Management Frames in IEEE 802.11 Wireless Networks
|
[
{
"docid": "8dcb99721a06752168075e6d45ee64c7",
"text": "The convenience of 802.11-based wireless access networks has led to widespread deployment in the consumer, industrial and military sectors. However, this use is predicated on an implicit assumption of confidentiality and availability. While the secu rity flaws in 802.11’s basic confidentially mechanisms have been widely publicized, the threats to network availability are far less widely appreciated. In fact, it has been suggested that 802.11 is highly suscepti ble to malicious denial-of-service (DoS) attacks tar geting its management and media access protocols. This paper provides an experimental analysis of such 802.11-specific attacks – their practicality, their ef ficacy and potential low-overhead implementation changes to mitigate the underlying vulnerabilities.",
"title": ""
}
] |
[
{
"docid": "82e490a288a5c8d94a749dc9b4ac3073",
"text": "We consider the problem of automatically estimating the 3D pose of humans from images, taken from multiple calibrated views. We show that it is possible and tractable to extend the pictorial structures framework, popular for 2D pose estimation, to 3D. We discuss how to use this framework to impose view, skeleton, joint angle and intersection constraints in 3D. The 3D pictorial structures are evaluated on multiple view data from a professional football game. The evaluation is focused on computational tractability, but we also demonstrate how a simple 2D part detector can be plugged into the framework.",
"title": ""
},
{
"docid": "466e715ad00c023ab27871d6b4e98104",
"text": "Cheap and versatile cameras make it possible to easily and quickly capture a wide variety of documents. However, low resolution cameras present a challenge to OCR because it is virtually impossible to do character segmentation independently from recognition. In this paper we solve these problems simultaneously by applying methods borrowed from cursive handwriting recognition. To achieve maximum robustness, we use a machine learning approach based on a convolutional neural network. When our system is combined with a language model using dynamic programming, the overall performance is in the vicinity of 80-95% word accuracy on pages captured with a 1024/spl times/768 webcam and 10-point text.",
"title": ""
},
{
"docid": "78fc46165449f94e75e70a2654abf518",
"text": "This paper presents a non-photorealistic rendering technique that automatically generates a line drawing from a photograph. We aim at extracting a set of coherent, smooth, and stylistic lines that effectively capture and convey important shapes in the image. We first develop a novel method for constructing a smooth direction field that preserves the flow of the salient image features. We then introduce the notion of flow-guided anisotropic filtering for detecting highly coherent lines while suppressing noise. Our method is simple and easy to implement. A variety of experimental results are presented to show the effectiveness of our method in producing self-contained, high-quality line illustrations.",
"title": ""
},
{
"docid": "03fbec34e163eb3c53b46ae49b4a3d49",
"text": "We consider the use of multi-agent systems to control network routing. Conventional approaches to this task are based on Ideal Shortest Path routing Algorithm (ISPA), under which at each moment each agent in the network sends all of its traffic down the path that will incur the lowest cost to that traffic. We demonstrate in computer experiments that due to the side-effects of one agent’s actions on another agent’s traffic, use of ISPA’s can result in large global cost. In particular, in a simulation of Braess’ paradox we see that adding new capacity to a network with ISPA agents can decrease overall throughput. The theory of COllective INtelligence (COIN) design concerns precisely the issue of avoiding such side-effects. We use that theory to derive an idealized routing algorithm and show that a practical machine-learning-based version of this algorithm, in which costs are only imprecisely estimated substantially outperforms the ISPA, despite having access to less information than does the ISPA. In particular, this practical COIN algorithm avoids Braess’ paradox.",
"title": ""
},
{
"docid": "5f5dd4b8476efa2d1907bf0185b054ab",
"text": "BACKGROUND\nAn accurate risk stratification tool is critical in identifying patients who are at high risk of frequent hospital readmissions. While 30-day hospital readmissions have been widely studied, there is increasing interest in identifying potential high-cost users or frequent hospital admitters. In this study, we aimed to derive and validate a risk stratification tool to predict frequent hospital admitters.\n\n\nMETHODS\nWe conducted a retrospective cohort study using the readily available clinical and administrative data from the electronic health records of a tertiary hospital in Singapore. The primary outcome was chosen as three or more inpatient readmissions within 12 months of index discharge. We used univariable and multivariable logistic regression models to build a frequent hospital admission risk score (FAM-FACE-SG) by incorporating demographics, indicators of socioeconomic status, prior healthcare utilization, markers of acute illness burden and markers of chronic illness burden. We further validated the risk score on a separate dataset and compared its performance with the LACE index using the receiver operating characteristic analysis.\n\n\nRESULTS\nOur study included 25,244 patients, with 70% randomly selected patients for risk score derivation and the remaining 30% for validation. Overall, 4,322 patients (17.1%) met the outcome. The final FAM-FACE-SG score consisted of nine components: Furosemide (Intravenous 40 mg and above during index admission); Admissions in past one year; Medifund (Required financial assistance); Frequent emergency department (ED) use (≥3 ED visits in 6 month before index admission); Anti-depressants in past one year; Charlson comorbidity index; End Stage Renal Failure on Dialysis; Subsidized ward stay; and Geriatric patient or not. In the experiments, the FAM-FACE-SG score had good discriminative ability with an area under the curve (AUC) of 0.839 (95% confidence interval [CI]: 0.825-0.853) for risk prediction of frequent hospital admission. In comparison, the LACE index only achieved an AUC of 0.761 (0.745-0.777).\n\n\nCONCLUSIONS\nThe FAM-FACE-SG score shows strong potential for implementation to provide near real-time prediction of frequent admissions. It may serve as the first step to identify high risk patients to receive resource intensive interventions.",
"title": ""
},
{
"docid": "ce5063166a5c95e6c3bd75471e827eb0",
"text": "Arithmetic coding is a data compression technique that encodes data (the data string) by creating a code string which represents a fractional value on the number line between 0 and 1. The coding algorithm is symbolwise recursive; i.e., it operates upon and encodes (decodes) one data symbol per iteration or recursion. On each recursion, the algorithm successively partitions an interval of the number line between 0 and I , and retains one of the partitions as the new interval. Thus, the algorithm successively deals with smaller intervals, and the code string, viewed as a magnitude, lies in each of the nested intervals. The data string is recovered by using magnitude comparisons on the code string to recreate how the encoder must have successively partitioned and retained each nested subinterval. Arithmetic coding differs considerably from the more familiar compression coding techniques, such as prefix (Huffman) codes. Also, it should not be confused with error control coding, whose object is to detect and correct errors in computer operations. This paper presents the key notions of arithmetic compression coding by means of simple examples.",
"title": ""
},
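The interval-narrowing recursion described above can be illustrated with a toy floating-point coder. Real arithmetic coders use integer renormalization to avoid the precision loss this version suffers on long messages, and the symbol probabilities below are a hypothetical source model, not one from the paper.

```python
def cumulative_lower_bounds(probs):
    """Lower bound of each symbol's sub-interval, e.g. {'a': 0.0, 'b': 0.6, ...}."""
    symbols = sorted(probs)
    cum, bounds = 0.0, {}
    for s in symbols:
        bounds[s] = cum
        cum += probs[s]
    return symbols, bounds

def arithmetic_encode(message, probs):
    """Encode `message` as a fraction in [0, 1) by successive interval partitioning."""
    _, bounds = cumulative_lower_bounds(probs)
    low, high = 0.0, 1.0
    for s in message:
        width = high - low
        high = low + width * (bounds[s] + probs[s])
        low = low + width * bounds[s]
    return (low + high) / 2  # any value inside [low, high) identifies the message

def arithmetic_decode(code, length, probs):
    """Recover `length` symbols by re-tracing the interval partitioning."""
    symbols, bounds = cumulative_lower_bounds(probs)
    low, high, out = 0.0, 1.0, []
    for _ in range(length):
        width = high - low
        for s in symbols:
            lo = low + width * bounds[s]
            hi = lo + width * probs[s]
            if lo <= code < hi:
                out.append(s)
                low, high = lo, hi
                break
    return "".join(out)

probs = {"a": 0.6, "b": 0.3, "c": 0.1}          # hypothetical source model
code = arithmetic_encode("abac", probs)
assert arithmetic_decode(code, 4, probs) == "abac"
```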
{
"docid": "2a187505ea098a45b5a4da4f4a32e049",
"text": "Combining information extraction systems yields significantly higher quality resources than each system in isolation. In this paper, we generalize such a mixing of sources and features in a framework called Ensemble Semantics. We show very large gains in entity extraction by combining state-of-the-art distributional and patternbased systems with a large set of features from a webcrawl, query logs, and Wikipedia. Experimental results on a webscale extraction of actors, athletes and musicians show significantly higher mean average precision scores (29% gain) compared with the current state of the art.",
"title": ""
},
{
"docid": "470265e6acd60a190401936fb7121c75",
"text": "Synesthesia is a conscious experience of systematically induced sensory attributes that are not experienced by most people under comparable conditions. Recent findings from cognitive psychology, functional brain imaging and electrophysiology have shed considerable light on the nature of synesthesia and its neurocognitive underpinnings. These cognitive and physiological findings are discussed with respect to a neuroanatomical framework comprising hierarchically organized cortical sensory pathways. We advance a neurobiological theory of synesthesia that fits within this neuroanatomical framework.",
"title": ""
},
{
"docid": "54ae3c09c95cbcf93a0005c8c06b671a",
"text": "Our main objective in this paper is to measure the value of customers acquired from Google search advertising accounting for two factors that have been overlooked in the conventional method widely adopted in the industry: the spillover effect of search advertising on customer acquisition and sales in offline channels and the lifetime value of acquired customers. By merging web traffic and sales data from a small-sized U.S. firm, we create an individual customer level panel which tracks all repeated purchases, both online and offline, and whether or not these purchases were referred from Google search advertising. To estimate the customer lifetime value, we apply the methodology in the CRM literature by developing an integrated model of customer lifetime, transaction rate, and gross profit margin, allowing for individual heterogeneity and a full correlation of the three processes. Results show that customers acquired through Google search advertising in our data have a higher transaction rate than customers acquired from other channels. After accounting for future purchases and spillover to offline channels, the calculated value of new customers is much higher than that when we use the conventional method. The approach used in our study provides a practical framework for firms to evaluate the long term profit impact of their search advertising investment in a multichannel setting.",
"title": ""
},
{
"docid": "fccd5b6a22d566fa52f3a331398c9dde",
"text": "The rapid pace of technological advances in recentyears has enabled a significant evolution and deployment ofWireless Sensor Networks (WSN). These networks are a keyplayer in the so-called Internet of Things, generating exponentiallyincreasing amounts of data. Nonetheless, there are veryfew documented works that tackle the challenges related with thecollection, manipulation and exploitation of the data generated bythese networks. This paper presents a proposal for integrating BigData tools (in rest and in motion) for the gathering, storage andanalysis of data generated by a WSN that monitors air pollutionlevels in a city. We provide a proof of concept that combinesHadoop and Storm for data processing, storage and analysis,and Arduino-based kits for constructing our sensor prototypes.",
"title": ""
},
{
"docid": "114b9bd9f99a18f6c963c6bbf1fb6e1b",
"text": "A self-rating scale was developed to measure the severity of fatigue. Two-hundred and seventy-four new registrations on a general practice list completed a 14-item fatigue scale. In addition, 100 consecutive attenders to a general practice completed the fatigue scale and the fatigue item of the revised Clinical Interview Schedule (CIS-R). These were compared by the application of Relative Operating Characteristic (ROC) analysis. Tests of internal consistency and principal components analyses were performed on both sets of data. The scale was found to be both reliable and valid. There was a high degree of internal consistency, and the principal components analysis supported the notion of a two-factor solution (physical and mental fatigue). The validation coefficients for the fatigue scale, using an arbitrary cut off score of 3/4 and the item on the CIS-R were: sensitivity 75.5 and specificity 74.5.",
"title": ""
},
{
"docid": "a6e6cf1473adb05f33b55cb57d6ed6d3",
"text": "In machine learning, data augmentation is the process of creating synthetic examples in order to augment a dataset used to learn a model. One motivation for data augmentation is to reduce the variance of a classifier, thereby reducing error. In this paper, we propose new data augmentation techniques specifically designed for time series classification, where the space in which they are embedded is induced by Dynamic Time Warping (DTW). The main idea of our approach is to average a set of time series and use the average time series as a new synthetic example. The proposed methods rely on an extension of DTW Barycentric Averaging (DBA), the averaging technique that is specifically developed for DTW. In this paper, we extend DBA to be able to calculate a weighted average of time series under DTW. In this case, instead of each time series contributing equally to the final average, some can contribute more than others. This extension allows us to generate an infinite number of new examples from any set of given time series. To this end, we propose three methods that choose the weights associated to the time series of the dataset. We carry out experiments on the 85 datasets of the UCR archive and demonstrate that our method is particularly useful when the number of available examples is limited (e.g. 2 to 6 examples per class) using a 1-NN DTW classifier. Furthermore, we show that augmenting full datasets is beneficial in most cases, as we observed an increase of accuracy on 56 datasets, no effect on 7 and a slight decrease on only 22.",
"title": ""
},
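A simplified two-series version of the weighted-averaging idea described above: align a second series to a reference with DTW, then pull the reference toward the aligned values by a chosen weight to produce a synthetic example. The full method averages whole sets of series with (weighted) DBA; this pairwise sketch only illustrates the mechanics.

```python
import numpy as np

def dtw_path(a, b):
    """Optimal DTW alignment path between 1-D series a and b."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from the end of both series to (0, 0).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def synthetic_example(reference, other, weight=0.5):
    """Weighted average of `other` mapped onto the time axis of `reference`."""
    path = dtw_path(reference, other)
    sums = np.zeros(len(reference))
    counts = np.zeros(len(reference))
    for i, j in path:
        sums[i] += other[j]
        counts[i] += 1
    aligned = sums / counts          # every index of `reference` is on the path
    return (1 - weight) * np.asarray(reference, dtype=float) + weight * aligned

ref = np.sin(np.linspace(0, 2 * np.pi, 50))
other = np.sin(np.linspace(0, 2 * np.pi, 50) + 0.3)
new_series = synthetic_example(ref, other, weight=0.3)
```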
{
"docid": "ee2c37fd2ebc3fd783bfe53213e7470e",
"text": "Mind-body interventions are beneficial in stress-related mental and physical disorders. Current research is finding associations between emotional disorders and vagal tone as indicated by heart rate variability. A neurophysiologic model of yogic breathing proposes to integrate research on yoga with polyvagal theory, vagal stimulation, hyperventilation, and clinical observations. Yogic breathing is a unique method for balancing the autonomic nervous system and influencing psychologic and stress-related disorders. Many studies demonstrate effects of yogic breathing on brain function and physiologic parameters, but the mechanisms have not been clarified. Sudarshan Kriya yoga (SKY), a sequence of specific breathing techniques (ujjayi, bhastrika, and Sudarshan Kriya) can alleviate anxiety, depression, everyday stress, post-traumatic stress, and stress-related medical illnesses. Mechanisms contributing to a state of calm alertness include increased parasympathetic drive, calming of stress response systems, neuroendocrine release of hormones, and thalamic generators. This model has heuristic value, research implications, and clinical applications.",
"title": ""
},
{
"docid": "57124eb55f82f1927252f27cbafff44d",
"text": "Gamification is a growing phenomenon of interest to both practitioners and researchers. There remains, however, uncertainty about the contours of the field. Defining gamification as “the process of making activities more game-like” focuses on the crucial space between the components that make up games and the holistic experience of gamefulness. It better fits real-world examples and connects gamification with the literature on persuasive design.",
"title": ""
},
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
},
{
"docid": "be5e98bb924a81baa561a3b3870c4a76",
"text": "Objective: Mastitis is one of the most costly diseases in dairy cows, which greatly decreases milk production. Use of antibiotics in cattle leads to antibiotic-resistance of mastitis-causing bacteria. The present study aimed to investigate synergistic effect of silver nanoparticles (AgNPs) with neomycin or gentamicin antibiotic on mastitis-causing Staphylococcus aureus. Materials and Methods: In this study, 46 samples of milk were taken from the cows with clinical and subclinical mastitis during the august-October 2015 sampling period. In addition to biochemical tests, nuc gene amplification by PCR was used to identify strains of Staphylococcus aureus. Disk diffusion test and microdilution were performed to determine minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC). Fractional Inhibitory Concentration (FIC) index was calculated to determine the interaction between a combination of AgNPs and each one of the antibiotics. Results: Twenty strains of Staphylococcus aureus were isolated from 46 milk samples and were confirmed by PCR. Based on disk diffusion test, 35%, 10% and 55% of the strains were respectively susceptible, moderately susceptible and resistant to gentamicin. In addition, 35%, 15% and 50% of the strains were respectively susceptible, moderately susceptible and resistant to neomycin. According to FIC index, gentamicin antibiotic and AgNPs had synergistic effects in 50% of the strains. Furthermore, neomycin antibiotic and AgNPs had synergistic effects in 45% of the strains. Conclusion: It could be concluded that a combination of AgNPs with either gentamicin or neomycin showed synergistic antibacterial properties in Staphylococcus aureus isolates from mastitis. In addition, some hypotheses were proposed to explain antimicrobial mechanism of the combination.",
"title": ""
},
{
"docid": "f3083088c9096bb1932b139098cbd181",
"text": "OBJECTIVE\nMaiming and death due to dog bites are uncommon but preventable tragedies. We postulated that patients admitted to a level I trauma center with dog bites would have severe injuries and that the gravest injuries would be those caused by pit bulls.\n\n\nDESIGN\nWe reviewed the medical records of patients admitted to our level I trauma center with dog bites during a 15-year period. We determined the demographic characteristics of the patients, their outcomes, and the breed and characteristics of the dogs that caused the injuries.\n\n\nRESULTS\nOur Trauma and Emergency Surgery Services treated 228 patients with dog bite injuries; for 82 of those patients, the breed of dog involved was recorded (29 were injured by pit bulls). Compared with attacks by other breeds of dogs, attacks by pit bulls were associated with a higher median Injury Severity Scale score (4 vs. 1; P = 0.002), a higher risk of an admission Glasgow Coma Scale score of 8 or lower (17.2% vs. 0%; P = 0.006), higher median hospital charges ($10,500 vs. $7200; P = 0.003), and a higher risk of death (10.3% vs. 0%; P = 0.041).\n\n\nCONCLUSIONS\nAttacks by pit bulls are associated with higher morbidity rates, higher hospital charges, and a higher risk of death than are attacks by other breeds of dogs. Strict regulation of pit bulls may substantially reduce the US mortality rates related to dog bites.",
"title": ""
},
{
"docid": "40f9065a8dc25d5d6999a98634dafbea",
"text": "Attempts to train a comprehensive artificial intelligence capable of solving multiple tasks have been impeded by a chronic problem called catastrophic forgetting. Although simply replaying all previous data alleviates the problem, it requires large memory and even worse, often infeasible in real world applications where the access to past data is limited. Inspired by the generative nature of the hippocampus as a short-term memory system in primate brain, we propose the Deep Generative Replay, a novel framework with a cooperative dual model architecture consisting of a deep generative model (“generator”) and a task solving model (“solver”). With only these two models, training data for previous tasks can easily be sampled and interleaved with those for a new task. We test our methods in several sequential learning settings involving image classification tasks.",
"title": ""
},
{
"docid": "334cbe19e968cb424719c7efbee9fd20",
"text": "We examine the relationship between scholarly practice and participatory technologies and explore how such technologies invite and reflect the emergence of a new form of scholarship that we call Networked Participatory Scholarship: scholars’ participation in online social networks to share, reflect upon, critique, improve, validate, and otherwise develop their scholarship. We discuss emergent techno-cultural pressures that may influence higher education scholars to reconsider some of the foundational principles upon which scholarship has been established due to the limitations of a pre-digital world, and delineate how scholarship itself is changing with the emergence of certain tools, social behaviors, and cultural expectations associated with participatory technologies. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "270e593aa89fb034d0de977fe6d618b2",
"text": "According to the website AcronymFinder.com which is one of the world's largest and most comprehensive dictionaries of acronyms, an average of 37 new human-edited acronym definitions are added every day. There are 379,918 acronyms with 4,766,899 definitions on that site up to now, and each acronym has 12.5 definitions on average. It is a very important research topic to identify what exactly an acronym means in a given context for document comprehension as well as for document retrieval. In this paper, we propose two word embedding based models for acronym disambiguation. Word embedding is to represent words in a continuous and multidimensional vector space, so that it is easy to calculate the semantic similarity between words by calculating the vector distance. We evaluate the models on MSH Dataset and ScienceWISE Dataset, and both models outperform the state-of-art methods on accuracy. The experimental results show that word embedding helps to improve acronym disambiguation.",
"title": ""
}
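A minimal sketch of the vector-space intuition described above: score each candidate expansion of an acronym by the cosine similarity between its averaged word vectors and the averaged vectors of the surrounding context. The tiny embedding table is a hypothetical stand-in; the paper's models are trained on large corpora and evaluated on the MSH and ScienceWISE datasets.

```python
import numpy as np

# Toy 3-dimensional embeddings standing in for corpus-trained word vectors.
embeddings = {
    "magnetic": np.array([0.9, 0.1, 0.0]),
    "resonance": np.array([0.8, 0.2, 0.1]),
    "imaging": np.array([0.7, 0.1, 0.2]),
    "scan": np.array([0.8, 0.0, 0.1]),
    "brain": np.array([0.6, 0.2, 0.1]),
    "mitral": np.array([0.1, 0.9, 0.1]),
    "regurgitation": np.array([0.0, 0.8, 0.2]),
    "valve": np.array([0.1, 0.7, 0.1]),
}

def average_vector(words):
    vecs = [embeddings[w] for w in words if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def disambiguate(context_words, expansions):
    """Pick the expansion whose averaged vector best matches the context."""
    ctx = average_vector(context_words)
    return max(expansions, key=lambda exp: cosine(ctx, average_vector(exp.split())))

expansions = ["magnetic resonance imaging", "mitral regurgitation"]
print(disambiguate(["brain", "scan"], expansions))   # -> "magnetic resonance imaging"
```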
] |
scidocsrr
|
c63c27c7e3e2d176948b319b9c2257e2
|
Regularizing Relation Representations by First-order Implications
|
[
{
"docid": "6a7bfed246b83517655cb79a951b1f48",
"text": "Hypernymy, textual entailment, and image captioning can be seen as special cases of a single visual-semantic hierarchy over words, sentences, and images. In this paper we advocate for explicitly modeling the partial order structure of this hierarchy. Towards this goal, we introduce a general method for learning ordered representations, and show how it can be applied to a variety of tasks involving images and language. We show that the resulting representations improve performance over current approaches for hypernym prediction and image-caption retrieval.",
"title": ""
},
{
"docid": "cc15583675d6b19fbd9a10f06876a61e",
"text": "Matrix factorization approaches to relation extraction provide several attractive features: they support distant supervision, handle open schemas, and leverage unlabeled data. Unfortunately, these methods share a shortcoming with all other distantly supervised approaches: they cannot learn to extract target relations without existing data in the knowledge base, and likewise, these models are inaccurate for relations with sparse data. Rule-based extractors, on the other hand, can be easily extended to novel relations and improved for existing but inaccurate relations, through first-order formulae that capture auxiliary domain knowledge. However, usually a large set of such formulae is necessary to achieve generalization. In this paper, we introduce a paradigm for learning low-dimensional embeddings of entity-pairs and relations that combine the advantages of matrix factorization with first-order logic domain knowledge. We introduce simple approaches for estimating such embeddings, as well as a novel training algorithm to jointly optimize over factual and first-order logic information. Our results show that this method is able to learn accurate extractors with little or no distant supervision alignments, while at the same time generalizing to textual patterns that do not appear in the formulae.",
"title": ""
},
{
"docid": "5d2eda181896eadbc56b9d12315062b4",
"text": "Corpus-based distributional semantic models capture degrees of semantic relatedness among the words of very large vocabularies, but have problems with logical phenomena such as entailment, that are instead elegantly handled by model-theoretic approaches, which, in turn, do not scale up. We combine the advantages of the two views by inducing a mapping from distributional vectors of words (or sentences) into a Boolean structure of the kind in which natural language terms are assumed to denote. We evaluate this Boolean Distributional Semantic Model (BDSM) on recognizing entailment between words and sentences. The method achieves results comparable to a state-of-the-art SVM, degrades more gracefully when less training data are available and displays interesting qualitative properties.",
"title": ""
}
] |
[
{
"docid": "f67e221a12e0d8ebb531a1e7c80ff2ff",
"text": "Fine-grained image classification is to recognize hundreds of subcategories belonging to the same basic-level category, such as 200 subcategories belonging to the bird, which is highly challenging due to large variance in the same subcategory and small variance among different subcategories. Existing methods generally first locate the objects or parts and then discriminate which subcategory the image belongs to. However, they mainly have two limitations: 1) relying on object or part annotations which are heavily labor consuming; and 2) ignoring the spatial relationships between the object and its parts as well as among these parts, both of which are significantly helpful for finding discriminative parts. Therefore, this paper proposes the object-part attention model (OPAM) for weakly supervised fine-grained image classification and the main novelties are: 1) object-part attention model integrates two level attentions: object-level attention localizes objects of images, and part-level attention selects discriminative parts of object. Both are jointly employed to learn multi-view and multi-scale features to enhance their mutual promotion; and 2) Object-part spatial constraint model combines two spatial constraints: object spatial constraint ensures selected parts highly representative and part spatial constraint eliminates redundancy and enhances discrimination of selected parts. Both are jointly employed to exploit the subtle and local differences for distinguishing the subcategories. Importantly, neither object nor part annotations are used in our proposed approach, which avoids the heavy labor consumption of labeling. Compared with more than ten state-of-the-art methods on four widely-used datasets, our OPAM approach achieves the best performance.",
"title": ""
},
{
"docid": "302acca2245a0d97cfec06a92dfc1a71",
"text": "concepts of tissue-integrated prostheses with remarkable functional advantages, innovations have resulted in dental implant solutions spanning the spectrum of dental needs. Current discussions concerning the relative merit of an implant versus a 3-unit fixed partial denture fully illustrate the possibility that single implants represent a bona fide choice for tooth replacement. Interestingly, when delving into the detailed comparisons between the outcomes of single-tooth implant versus fixed partial dentures or the intentional replacement of a failing tooth with an implant instead of restoration involving root canal therapy, little emphasis has been placed on the relative esthetic merits of one or another therapeutic approach to tooth replacement therapy. An ideal prosthesis should fully recapitulate or enhance the esthetic features of the tooth or teeth it replaces. Although it is clearly beyond the scope of this article to compare the various methods of esthetic tooth replacement, there is, perhaps, sufficient space to share some insights regarding an objective approach to planning, executing and evaluating the esthetic merit of single-tooth implant restorations.",
"title": ""
},
{
"docid": "9c3050cca4deeb2d94ae5cff883a2d68",
"text": "High speed, low latency obstacle avoidance is essential for enabling Micro Aerial Vehicles (MAVs) to function in cluttered and dynamic environments. While other systems exist that do high-level mapping and 3D path planning for obstacle avoidance, most of these systems require high-powered CPUs on-board or off-board control from a ground station. We present a novel entirely on-board approach, leveraging a light-weight low power stereo vision system on FPGA. Our approach runs at a frame rate of 60 frames a second on VGA-sized images and minimizes latency between image acquisition and performing reactive maneuvers, allowing MAVs to fly more safely and robustly in complex environments. We also suggest our system as a light-weight safety layer for systems undertaking more complex tasks, like mapping the environment. Finally, we show our algorithm implemented on a lightweight, very computationally constrained platform, and demonstrate obstacle avoidance in a variety of environments.",
"title": ""
},
{
"docid": "b6fde2eeef81b222c7472f07190d7e5a",
"text": "Combinatorial testing (also called interaction testing) is an effective specification-based test input generation technique. By now most of research work in combinatorial testing aims to propose novel approaches trying to generate test suites with minimum size that still cover all the pairwise, triple, or n-way combinations of factors. Since the difficulty of solving this problem is demonstrated to be NP-hard, existing approaches have been designed to generate optimal or near optimal combinatorial test suites in polynomial time. In this paper, we try to apply particle swarm optimization (PSO), a kind of meta-heuristic search technique, to pairwise testing (i.e. a special case of combinatorial testing aiming to cover all the pairwise combinations). To systematically build pairwise test suites, we propose two different PSO based algorithms. One algorithm is based on one-test-at-a-time strategy and the other is based on IPO-like strategy. In these two different algorithms, we use PSO to complete the construction of a single test. To successfully apply PSO to cover more uncovered pairwise combinations in this construction process, we provide a detailed description on how to formulate the search space, define the fitness function and set some heuristic settings. To verify the effectiveness of our approach, we implement these algorithms and choose some typical inputs. In our empirical study, we analyze the impact factors of our approach and compare our approach to other well-known approaches. Final empirical results show the effectiveness and efficiency of our approach.",
"title": ""
},
{
"docid": "9d1dc15130b9810f6232b4a3c77e8038",
"text": "This paper argues that we should seek the golden middle way between dynamically and statically typed languages.",
"title": ""
},
{
"docid": "cf26ade7932ba0c5deb01e4b3d2463bb",
"text": "Researchers are often confused about what can be inferred from significance tests. One problem occurs when people apply Bayesian intuitions to significance testing-two approaches that must be firmly separated. This article presents some common situations in which the approaches come to different conclusions; you can see where your intuitions initially lie. The situations include multiple testing, deciding when to stop running participants, and when a theory was thought of relative to finding out results. The interpretation of nonsignificant results has also been persistently problematic in a way that Bayesian inference can clarify. The Bayesian and orthodox approaches are placed in the context of different notions of rationality, and I accuse myself and others as having been irrational in the way we have been using statistics on a key notion of rationality. The reader is shown how to apply Bayesian inference in practice, using free online software, to allow more coherent inferences from data.",
"title": ""
},
{
"docid": "ff1c454fc735c49325a62909cc0e27e8",
"text": "We describe a framework for characterizing people’s behavior with Digital Live Art. Our framework considers people’s wittingness, technical skill, and interpretive abilities in relation to the performance frame. Three key categories of behavior with respect to the performance frame are proposed: performing, participating, and spectating. We exemplify the use of our framework by characterizing people’s interaction with a DLA iPoi. This DLA is based on the ancient Maori art form of poi and employs a wireless, peer-to-peer exertion interface. The design goal of iPoi is to draw people into the performance frame and support transitions from audience to participant and on to performer. We reflect on iPoi in a public performance and outline its key design features.",
"title": ""
},
{
"docid": "bc955a52d08f192b06844721fcf635a0",
"text": "Total quality management (TQM) has been widely considered as the strategic, tactical and operational tool in the quality management research field. It is one of the most applied and well accepted approaches for business excellence besides Continuous Quality Improvement (CQI), Six Sigma, Just-in-Time (JIT), and Supply Chain Management (SCM) approaches. There is a great enthusiasm among manufacturing and service industries in adopting and implementing this strategy in order to maintain their sustainable competitive advantage. The aim of this study is to develop and propose the conceptual framework and research model of TQM implementation in relation to company performance particularly in context with the Indian service companies. It examines the relationships between TQM and company’s performance by measuring the quality performance as performance indicator. A comprehensive review of literature on TQM and quality performance was carried out to accomplish the objectives of this study and a research model and hypotheses were generated. Two research questions and 34 hypotheses were proposed to re-validate the TQM practices. The adoption of such a theoretical model on TQM and company’s quality performance would help managers, decision makers, and practitioners of TQM in better understanding of the TQM practices and to focus on the identified practices while implementing TQM in their companies. Further, the scope for future study is to test and validate the theoretical model by collecting the primary data from the Indian service companies and using Structural Equation Modeling (SEM) approach for hypotheses testing.",
"title": ""
},
{
"docid": "d10c1659bc8f166c077a658421fbd388",
"text": "Blockchain-based distributed computing platforms enable the trusted execution of computation—defined in the form of smart contracts—without trusted agents. Smart contracts are envisioned to have a variety of applications, ranging from financial to IoT asset tracking. Unfortunately, the development of smart contracts has proven to be extremely error prone. In practice, contracts are riddled with security vulnerabilities comprising a critical issue since bugs are by design nonfixable and contracts may handle financial assets of significant value. To facilitate the development of secure smart contracts, we have created the FSolidM framework, which allows developers to define contracts as finite state machines (FSMs) with rigorous and clear semantics. FSolidM provides an easy-to-use graphical editor for specifying FSMs, a code generator for creating Ethereum smart contracts, and a set of plugins that developers may add to their FSMs to enhance security and functionality.",
"title": ""
},
{
"docid": "d9240bad8516bea63f9340bcde366ee4",
"text": "This paper describes a novel feature selection algorithm for unsupervised clustering, that combines the clustering ensembles method and the population based incremental learning algorithm. The main idea of the proposed unsupervised feature selection algorithm is to search for a subset of all features such that the clustering algorithm trained on this feature subset can achieve the most similar clustering solution to the one obtained by an ensemble learning algorithm. In particular, a clustering solution is firstly achieved by a clustering ensembles method, then the population based incremental learning algorithm is adopted to find the feature subset that best fits the obtained clustering solution. One advantage of the proposed unsupervised feature selection algorithm is that it is dimensionality-unbiased. In addition, the proposed unsupervised feature selection algorithm leverages the consensus across multiple clustering solutions. Experimental results on several real data sets demonstrate that the proposed unsupervised feature selection algorithm is often able to obtain a better feature subset when compared with other existing unsupervised feature selection algorithms.",
"title": ""
},
{
"docid": "2271085513d9239225c9bfb2f6b155b1",
"text": "Information Security has become an important issue in data communication. Encryption algorithms have come up as a solution and play an important role in information security system. On other side, those algorithms consume a significant amount of computing resources such as CPU time, memory and battery power. Therefore it is essential to measure the performance of encryption algorithms. In this work, three encryption algorithms namely DES, AES and Blowfish are analyzed by considering certain performance metrics such as execution time, memory required for implementation and throughput. Based on the experiments, it has been concluded that the Blowfish is the best performing algorithm among the algorithms chosen for implementation.",
"title": ""
},
{
"docid": "aa2bf057322c9a8d2c7d1ce7d6a384d3",
"text": "Our team is currently developing an Automated Cyber Red Teaming system that, when given a model-based capture of an organisation's network, uses automated planning techniques to generate and assess multi-stage attacks. Specific to this paper, we discuss our development of the visual analytic component of this system. Through various views that display network attacks paths at different levels of abstraction, our tool aims to enhance cyber situation awareness of human decision makers.",
"title": ""
},
{
"docid": "00f2bb2dd3840379c2442c018407b1c8",
"text": "BACKGROUND\nFacebook is a social networking site (SNS) for communication, entertainment and information exchange. Recent research has shown that excessive use of Facebook can result in addictive behavior in some individuals.\n\n\nAIM\nTo assess the patterns of Facebook use in post-graduate students of Yenepoya University and evaluate its association with loneliness.\n\n\nMETHODS\nA cross-sectional study was done to evaluate 100 post-graduate students of Yenepoya University using Bergen Facebook Addiction Scale (BFAS) and University of California and Los Angeles (UCLA) loneliness scale version 3. Descriptive statistics were applied. Pearson's bivariate correlation was done to see the relationship between severity of Facebook addiction and the experience of loneliness.\n\n\nRESULTS\nMore than one-fourth (26%) of the study participants had Facebook addiction and 33% had a possibility of Facebook addiction. There was a significant positive correlation between severity of Facebook addiction and extent of experience of loneliness ( r = .239, p = .017).\n\n\nCONCLUSION\nWith the rapid growth of popularity and user-base of Facebook, a significant portion of the individuals are susceptible to develop addictive behaviors related to Facebook use. Loneliness is a factor which influences addiction to Facebook.",
"title": ""
},
{
"docid": "7167964274b05da06beddb1aef119b2c",
"text": "A great variety of systems in nature, society and technology—from the web of sexual contacts to the Internet, from the nervous system to power grids—can be modeled as graphs of vertices coupled by edges. The network structure, describing how the graph is wired, helps us understand, predict and optimize the behavior of dynamical systems. In many cases, however, the edges are not continuously active. As an example, in networks of communication via email, text messages, or phone calls, edges represent sequences of instantaneous or practically instantaneous contacts. In some cases, edges are active for non-negligible periods of time: e.g., the proximity patterns of inpatients at hospitals can be represented by a graph where an edge between two individuals is on throughout the time they are at the same ward. Like network topology, the temporal structure of edge activations can affect dynamics of systems interacting through the network, from disease contagion on the network of patients to information diffusion over an e-mail network. In this review, we present the emergent field of temporal networks, and discuss methods for analyzing topological and temporal structure and models for elucidating their relation to the behavior of dynamical systems. In the light of traditional network theory, one can see this framework as moving the information of when things happen from the dynamical system on the network, to the network itself. Since fundamental properties, such as the transitivity of edges, do not necessarily hold in temporal networks, many of these methods need to be quite different from those for static networks. The study of temporal networks is very interdisciplinary in nature. Reflecting this, even the object of study has many names—temporal graphs, evolving graphs, time-varying graphs, time-aggregated graphs, time-stamped graphs, dynamic networks, dynamic graphs, dynamical graphs, and so on. This review covers different fields where temporal graphs are considered, but does not attempt to unify related terminology—rather, we want to make papers readable across disciplines.",
"title": ""
},
{
"docid": "26f2b200bf22006ab54051c9288420e8",
"text": "Emotion keyword spotting approach can detect emotion well for explicit emotional contents while it obviously cannot compare to supervised learning approaches for detecting emotional contents of particular events. In this paper, we target earthquake situations in Japan as the particular events for emotion analysis because the affected people often show their states and emotions towards the situations via social networking sites. Additionally, tracking crowd emotions in the Internet during the earthquakes can help authorities to quickly decide appropriate assistance policies without paying the cost as the traditional public surveys. Our three main contributions in this paper are: a) the appropriate choice of emotions; b) the novel proposal of two classification methods for determining the earthquake related tweets and automatically identifying the emotions in Twitter; c) tracking crowd emotions during different earthquake situations, a completely new application of emotion analysis research. Our main analysis results show that Twitter users show their Fear and Anxiety right after the earthquakes occurred while Calm and Unpleasantness are not showed clearly during the small earthquakes but in the large tremor.",
"title": ""
},
{
"docid": "0f01dbd1e554ee53ca79258610d835c1",
"text": "Received: 18 March 2009 Revised: 18 March 2009 Accepted: 20 April 2009 Abstract Adaptive visualization is a new approach at the crossroads of user modeling and information visualization. Taking into account information about a user, adaptive visualization attempts to provide user-adapted visual presentation of information. This paper proposes Adaptive VIBE, an approach for adaptive visualization of search results in an intelligence analysis context. Adaptive VIBE extends the popular VIBE visualization framework by infusing user model terms as reference points for spatial document arrangement and manipulation. We explored the value of the proposed approach using data obtained from a user study. The result demonstrated that user modeling and spatial visualization technologies are able to reinforce each other, creating an enhanced level of user support. Spatial visualization amplifies the user model's ability to separate relevant and non-relevant documents, whereas user modeling adds valuable reference points to relevance-based spatial visualization. Information Visualization (2009) 8, 167--179. doi:10.1057/ivs.2009.12",
"title": ""
},
{
"docid": "443637fcc9f9efcf1026bb64aa0a9c97",
"text": "Given the unprecedented availability of data and computing resources, there is widespread renewed interest in applying data-driven machine learning methods to problems for which the development of conventional engineering solutions is challenged by modeling or algorithmic deficiencies. This tutorial-style paper starts by addressing the questions of why and when such techniques can be useful. It then provides a high-level introduction to the basics of supervised and unsupervised learning. For both supervised and unsupervised learning, exemplifying applications to communication networks are discussed by distinguishing tasks carried out at the edge and at the cloud segments of the network at different layers of the protocol stack, with an emphasis on the physical layer.",
"title": ""
},
{
"docid": "d60f812bb8036a2220dab8740f6a74c4",
"text": "UNLABELLED\nThe limit of the Colletotrichum gloeosporioides species complex is defined genetically, based on a strongly supported clade within the Colletotrichum ITS gene tree. All taxa accepted within this clade are morphologically more or less typical of the broadly defined C. gloeosporioides, as it has been applied in the literature for the past 50 years. We accept 22 species plus one subspecies within the C. gloeosporioides complex. These include C. asianum, C. cordylinicola, C. fructicola, C. gloeosporioides, C. horii, C. kahawae subsp. kahawae, C. musae, C. nupharicola, C. psidii, C. siamense, C. theobromicola, C. tropicale, and C. xanthorrhoeae, along with the taxa described here as new, C. aenigma, C. aeschynomenes, C. alatae, C. alienum, C. aotearoa, C. clidemiae, C. kahawae subsp. ciggaro, C. salsolae, and C. ti, plus the nom. nov. C. queenslandicum (for C. gloeosporioides var. minus). All of the taxa are defined genetically on the basis of multi-gene phylogenies. Brief morphological descriptions are provided for species where no modern description is available. Many of the species are unable to be reliably distinguished using ITS, the official barcoding gene for fungi. Particularly problematic are a set of species genetically close to C. musae and another set of species genetically close to C. kahawae, referred to here as the Musae clade and the Kahawae clade, respectively. Each clade contains several species that are phylogenetically well supported in multi-gene analyses, but within the clades branch lengths are short because of the small number of phylogenetically informative characters, and in a few cases individual gene trees are incongruent. Some single genes or combinations of genes, such as glyceraldehyde-3-phosphate dehydrogenase and glutamine synthetase, can be used to reliably distinguish most taxa and will need to be developed as secondary barcodes for species level identification, which is important because many of these fungi are of biosecurity significance. In addition to the accepted species, notes are provided for names where a possible close relationship with C. gloeosporioides sensu lato has been suggested in the recent literature, along with all subspecific taxa and formae speciales within C. gloeosporioides and its putative teleomorph Glomerella cingulata.\n\n\nTAXONOMIC NOVELTIES\nName replacement - C. queenslandicum B. Weir & P.R. Johnst. New species - C. aenigma B. Weir & P.R. Johnst., C. aeschynomenes B. Weir & P.R. Johnst., C. alatae B. Weir & P.R. Johnst., C. alienum B. Weir & P.R. Johnst, C. aotearoa B. Weir & P.R. Johnst., C. clidemiae B. Weir & P.R. Johnst., C. salsolae B. Weir & P.R. Johnst., C. ti B. Weir & P.R. Johnst. New subspecies - C. kahawae subsp. ciggaro B. Weir & P.R. Johnst. Typification: Epitypification - C. queenslandicum B. Weir & P.R. Johnst.",
"title": ""
},
{
"docid": "0cd2da131bf78526c890dae72514a8f0",
"text": "This paper presents a research model to explicate that the level of consumers’ participation on companies’ brand microblogs is influenced by their brand attachment process. That is, self-congruence and partner quality affect consumers’ trust and commitment toward companies’ brands, which in turn influence participation on brand microblogs. Further, we propose that gender has important moderating effects in our research model. We empirically test the research hypotheses through an online survey. The findings illustrate that self-congruence and partner quality have positive effects on trust and commitment. Trust affects commitment and participation, while participation is also influenced by commitment. More importantly, the effects of self-congruence on trust and commitment are found to be stronger for male consumers than females. In contrast, the effects of partner quality on trust and commitment are stronger for female consumers than males. Trust posits stronger effects on commitment and participation for males, while commitment has a stronger effect on participation for females. We believe that our findings contribute to the literature on consumer participation behavior and gender differences on brand microblogs. Companies can also apply our findings to strengthen their brand building and participation level of different consumers on their microblogs. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "4b28bc08ebeaf9be27ce642e622e064d",
"text": "Homogeneity analysis combines the idea of maximizing the correlations between variables of a multivariate data set with that of optimal scaling. In this article we present methodological and practical issues of the R package homals which performs homogeneity analysis and various extensions. By setting rank constraints nonlinear principal component analysis can be performed. The variables can be partitioned into sets such that homogeneity analysis is extended to nonlinear canonical correlation analysis or to predictive models which emulate discriminant analysis and regression models. For each model the scale level of the variables can be taken into account by setting level constraints. All algorithms allow for missing values.",
"title": ""
}
] |
scidocsrr
|
0d88654662c00b9af81177886c26ca45
|
ARGUS: An Automated Multi-Agent Visitor Identification System
|
[
{
"docid": "ab34db97483f2868697f7b0abab8daaa",
"text": "This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, assessing predictions, handling noisy data and outliers, improving the quality of predictions by tuning fit parameters, interference between old and new data, implementing locally weighted learning efficiently, and applications of locally weighted learning. A companion paper surveys how locally weighted learning can be used in robot learning and control.",
"title": ""
}
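A compact sketch of the locally weighted linear regression surveyed above: for each query point, nearby training points receive Gaussian weights controlled by a bandwidth (smoothing) parameter, and a weighted least-squares line is fit locally. The bandwidth value and the small ridge term are illustrative choices, not prescriptions from the survey.

```python
import numpy as np

def lwlr_predict(x_query, X, y, bandwidth=0.5):
    """Locally weighted linear regression prediction at a single query point.

    X: (n, d) training inputs, y: (n,) targets. A Gaussian kernel over the
    distance to `x_query` weights each training point in the least-squares fit.
    """
    n = X.shape[0]
    Xb = np.hstack([np.ones((n, 1)), X])          # add intercept column
    xq = np.hstack([1.0, np.atleast_1d(x_query)])
    dists = np.linalg.norm(X - x_query, axis=1)
    w = np.exp(-(dists ** 2) / (2 * bandwidth ** 2))
    W = np.diag(w)
    # Weighted normal equations; the tiny ridge term keeps them well posed.
    A = Xb.T @ W @ Xb + 1e-8 * np.eye(Xb.shape[1])
    beta = np.linalg.solve(A, Xb.T @ W @ y)
    return float(xq @ beta)

X = np.linspace(0, 6, 80).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * np.random.default_rng(1).standard_normal(80)
print(lwlr_predict(np.array([3.0]), X, y))        # local estimate near sin(3)
```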
] |
[
{
"docid": "07239163734357138011bbcc7b9fd38f",
"text": "Open cross-section, thin-walled, cold-formed steel columns have at least three competing buckling modes: local, dis and Euler~i.e., flexural or flexural-torsional ! buckling. Closed-form prediction of the buckling stress in the local mode, includ interaction of the connected elements, and the distortional mode, including consideration of the elastic and geometric stiffne web/flange juncture, are provided and shown to agree well with numerical methods. Numerical analyses and experiments postbuckling capacity in the distortional mode is lower than in the local mode. Current North American design specificati cold-formed steel columns ignore local buckling interaction and do not provide an explicit check for distortional buckling. E experiments on cold-formed channel, zed, and rack columns indicate inconsistency and systematic error in current design me provide validation for alternative methods. A new method is proposed for design that explicitly incorporates local, distortional an buckling, does not require calculations of effective width and/or effective properties, gives reliable predictions devoid of systema and provides a means to introduce rational analysis for elastic buckling prediction into the design of thin-walled columns. DOI: 10.1061/ ~ASCE!0733-9445~2002!128:3~289! CE Database keywords: Thin-wall structures; Columns; Buckling; Cold-formed steel.",
"title": ""
},
{
"docid": "d8b19c953cc66b6157b87da402dea98a",
"text": "In this paper we propose a new semi-supervised GAN architecture (ss-InfoGAN) for image synthesis that leverages information from few labels (as little as 0.22%, max. 10% of the dataset) to learn semantically meaningful and controllable data representations where latent variables correspond to label categories. The architecture builds on Information Maximizing Generative Adversarial Networks (InfoGAN) and is shown to learn both continuous and categorical codes and achieves higher quality of synthetic samples compared to fully unsupervised settings. Furthermore, we show that using small amounts of labeled data speeds-up training convergence. The architecture maintains the ability to disentangle latent variables for which no labels are available. Finally, we contribute an information-theoretic reasoning on how introducing semi-supervision increases mutual information between synthetic and real data.",
"title": ""
},
{
"docid": "6adb6b6a60177ed445a300a2a02c30b9",
"text": "Barrier coverage is a critical issue in wireless sensor networks for security applications (e.g., border protection) where directional sensors (e.g., cameras) are becoming more popular than omni-directional scalar sensors (e.g., microphones). However, barrier coverage cannot be guaranteed after initial random deployment of sensors, especially for directional sensors with limited sensing angles. In this paper, we study how to efficiently use mobile sensors to achieve \\(k\\) -barrier coverage. In particular, two problems are studied under two scenarios. First, when only the stationary sensors have been deployed, what is the minimum number of mobile sensors required to form \\(k\\) -barrier coverage? Second, when both the stationary and mobile sensors have been pre-deployed, what is the maximum number of barriers that could be formed? To solve these problems, we introduce a novel concept of weighted barrier graph (WBG) and prove that determining the minimum number of mobile sensors required to form \\(k\\) -barrier coverage is related with finding \\(k\\) vertex-disjoint paths with the minimum total length on the WBG. With this observation, we propose an optimal solution and a greedy solution for each of the two problems. Both analytical and experimental studies demonstrate the effectiveness of the proposed algorithms.",
"title": ""
},
{
"docid": "531d387a14eefa6a8c45ad64039f29be",
"text": "This paper presents an S-Transform based probabilistic neural network (PNN) classifier for recognition of power quality (PQ) disturbances. The proposed method requires less number of features as compared to wavelet based approach for the identification of PQ events. The features extracted through the S-Transform are trained by a PNN for automatic classification of the PQ events. Since the proposed methodology can reduce the features of the disturbance signal to a great extent without losing its original property, less memory space and learning PNN time are required for classification. Eleven types of disturbances are considered for the classification problem. The simulation results reveal that the combination of S-Transform and PNN can effectively detect and classify different PQ events. The classification performance of PNN is compared with a feedforward multilayer (FFML) neural network (NN) and learning vector quantization (LVQ) NN. It is found that the classification performance of PNN is better than both FFML and LVQ.",
"title": ""
},
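The entry above hinges on a probabilistic neural network (PNN) classifier. The sketch below shows only the basic PNN decision rule, a Parzen-window estimate of each class-conditional density; the S-Transform feature extraction step is omitted, and the smoothing parameter sigma and the toy feature vectors are assumptions for illustration only.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.2):
    """Return the class whose Parzen-window density at x is largest."""
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        Xc = train_X[train_y == c]
        # Sum of Gaussian kernels centred on the training patterns of class c.
        d2 = np.sum((Xc - x) ** 2, axis=1)
        scores.append(np.mean(np.exp(-d2 / (2 * sigma ** 2))))
    return classes[int(np.argmax(scores))]

# Toy feature vectors standing in for S-Transform features of two PQ events.
train_X = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])
train_y = np.array(["sag", "swell", "swell", "swell"][:0] + ["sag", "sag", "swell", "swell"])
print(pnn_classify(np.array([0.15, 0.85]), train_X, train_y))
```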
{
"docid": "44928aa4c5b294d1b8f24eaab14e9ce7",
"text": "Most exact algorithms for solving partially observable Markov decision processes (POMDPs) are based on a form of dynamic programming in which a piecewise-linear and convex representation of the value function is updated at every iteration to more accurately approximate the true value function. However, the process is computationally expensive, thus limiting the practical application of POMDPs in planning. To address this current limitation, we present a parallel distributed algorithm based on the Restricted Region method proposed by Cassandra, Littman and Zhang [1]. We compare performance of the parallel algorithm against a serial implementation Restricted Region.",
"title": ""
},
{
"docid": "e082b7792f72d54c63ed025ae5c7fa0f",
"text": "Cloud computing's pay-per-use model greatly reduces upfront cost and also enables on-demand scalability as service demand grows or shrinks. Hybrid clouds are an attractive option in terms of cost benefit, however, without proper elastic resource management, computational resources could be over-provisioned or under-provisioned, resulting in wasting money or failing to satisfy service demand. In this paper, to accomplish accurate performance prediction and cost-optimal resource management for hybrid clouds, we introduce Workload-tailored Elastic Compute Units (WECU) as a measure of computing resources analogous to Amazon EC2's ECUs, but customized for a specific workload. We present a dynamic programming-based scheduling algorithm to select a combination of private and public resources which satisfy a desired throughput. Using a loosely-coupled benchmark, we confirmed WECUs have 24 (J% better runtime prediction ability than ECUs on average. Moreover, simulation results with a real workload distribution of web service requests show that our WECU-based algorithm reduces costs by 8-31% compared to a fixed provisioning approach.",
"title": ""
},
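The entry above describes a dynamic-programming scheduler that picks a mix of resources to reach a throughput target at minimum cost. The sketch below is only a simplified covering-knapsack style dynamic program over whole throughput units; the instance catalogue, the prices, and the unit granularity are hypothetical, and this is not the authors' WECU algorithm.

```python
def min_cost_provisioning(instance_types, target_wecu):
    """Cheapest mix of instances whose WECUs sum to at least target_wecu.

    instance_types: list of (wecu_per_instance, hourly_cost) tuples.
    """
    INF = float("inf")
    best = [0.0] + [INF] * target_wecu  # best[n] = min cost to cover n WECUs
    for need in range(1, target_wecu + 1):
        for wecu, cost in instance_types:
            prev = max(need - wecu, 0)
            if best[prev] + cost < best[need]:
                best[need] = best[prev] + cost
    return best[target_wecu]

# Hypothetical catalogue: (WECUs delivered per instance, dollars per hour).
catalogue = [(2, 0.10), (5, 0.22), (13, 0.55)]
print(min_cost_provisioning(catalogue, target_wecu=37))
```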
{
"docid": "c998ec64fed8be3657f26f4128593581",
"text": "In recent years, data mining researchers have developed efficient association rule algorithms for retail market basket analysis. Still, retailers often complain about how to adopt association rules to optimize concrete retail marketing-mix decisions. It is in this context that, in a previous paper, the authors have introduced a product selection model called PROFSET. This model selects the most interesting products from a product assortment based on their cross-selling potential given some retailer defined constraints. However this model suffered from an important deficiency: it could not deal effectively with supermarket data, and no provisions were taken to include retail category management principles. Therefore, in this paper, the authors present an important generalization of the existing model in order to make it suitable for supermarket data as well, and to enable retailers to add category restrictions to the model. Experiments on real world data obtained from a Belgian supermarket chain produce very promising results and demonstrate the effectiveness of the generalized PROFSET model.",
"title": ""
},
{
"docid": "12a5fb7867cddaca43c3508b0c1a1ed2",
"text": "The class scheduling problem can be modeled by a graph where the vertices and edges represent the courses and the common students, respectively. The problem is to assign the courses a given number of time slots (colors), where each time slot can be used for a given number of class rooms. The Vertex Coloring (VC) algorithm is a polynomial time algorithm which produces a conflict free solution using the least number of colors [9]. However, the VC solution may not be implementable because it uses a number of time slots that exceed the available ones with unbalanced use of class rooms. We propose a heuristic approach VC* to (1) promote uniform distribution of courses over the colors and to (2) balance course load for each time slot over the available class rooms. The performance function represents the percentage of students in all courses that could not be mapped to time slots or to class rooms. A randomized simulation of registration of four departments with up to 1200 students is used to evaluate the performance of proposed heuristic.",
"title": ""
},
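The scheduling entry above models courses as vertices and shared students as edges. As a small illustration of the vertex-coloring baseline it builds on (not the proposed VC* heuristic with room balancing), here is a greedy largest-degree-first coloring sketch in Python; the course names and the slot count are made up.

```python
def greedy_timetable(conflicts, n_slots):
    """Greedy vertex coloring: give each course the lowest conflict-free slot.

    conflicts: dict mapping course -> set of courses that share students.
    Returns a dict course -> slot index, or None when no slot is free.
    """
    # Color the most-conflicted (highest-degree) courses first.
    order = sorted(conflicts, key=lambda c: len(conflicts[c]), reverse=True)
    slot_of = {}
    for course in order:
        used = {slot_of[n] for n in conflicts[course] if n in slot_of}
        free = [s for s in range(n_slots) if s not in used]
        slot_of[course] = free[0] if free else None
    return slot_of

# Toy conflict graph: an edge means the two courses share students.
conflicts = {
    "math": {"physics", "chem"},
    "physics": {"math"},
    "chem": {"math", "bio"},
    "bio": {"chem"},
}
print(greedy_timetable(conflicts, n_slots=2))
```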
{
"docid": "8671020512161e35818565450e557f77",
"text": "Continuous vector representations of words and objects appear to carry surprisingly rich semantic content. In this paper, we advance both the conceptual and theoretical understanding of word embeddings in three ways. First, we ground embeddings in semantic spaces studied in cognitivepsychometric literature and introduce new evaluation tasks. Second, in contrast to prior work, we take metric recovery as the key object of study, unify existing algorithms as consistent metric recovery methods based on co-occurrence counts from simple Markov random walks, and propose a new recovery algorithm. Third, we generalize metric recovery to graphs and manifolds, relating co-occurence counts on random walks in graphs and random processes on manifolds to the underlying metric to be recovered, thereby reconciling manifold estimation and embedding algorithms. We compare embedding algorithms across a range of tasks, from nonlinear dimensionality reduction to three semantic language tasks, including analogies, sequence completion, and classification.",
"title": ""
},
{
"docid": "2037a51f2f67fd480f327ecd1b906a2a",
"text": "Electronic fashion (eFashion) garments use technology to augment the human body with wearable interaction. In developing ideas, eFashion designers need to prototype the role and behavior of the interactive garment in context; however, current wearable prototyping toolkits require semi-permanent construction with physical materials that cannot easily be altered. We present Bod-IDE, an augmented reality 'mirror' that allows eFashion designers to create virtual interactive garment prototypes. Designers can quickly build, refine, and test on-the-body interactions without the need to connect or program electronics. By envisioning interaction with the body in mind, eFashion designers can focus more on reimagining the relationship between bodies, clothing, and technology.",
"title": ""
},
{
"docid": "556f33d199e6516a4aa8ebca998facf2",
"text": "R ecommender systems have become important tools in ecommerce. They combine one user’s ratings of products or services with ratings from other users to answer queries such as “Would I like X?” with predictions and suggestions. Users thus receive anonymous recommendations from people with similar tastes. While this process seems innocuous, it aggregates user preferences in ways analogous to statistical database queries, which can be exploited to identify information about a particular user. This is especially true for users with eclectic tastes who rate products across different types or domains in the systems. These straddlers highlight the conflict between personalization and privacy in recommender systems. While straddlers enable serendipitous recommendations, information about their existence could be used in conjunction with other data sources to uncover identities and reveal personal details. We use a graph-theoretic model to study the benefit from and risk to straddlers.",
"title": ""
},
{
"docid": "26f2c0ca1d9fc041a6b04f0644e642dd",
"text": "During the last years, in particular due to the Digital Humanities, empirical processes, data capturing or data analysis got more and more popular as part of humanities research. In this paper, we want to show that even the complete scientific method of natural science can be applied in the humanities. By applying the scientific method to the humanities, certain kinds of problems can be solved in a confirmable and replicable manner. In particular, we will argue that patterns may be perceived as the analogon to formulas in natural science. This may provide a new way of representing solution-oriented knowledge in the humanities. Keywords-pattern; pattern languages; digital humanities;",
"title": ""
},
{
"docid": "a9de4aa3f0268f23d77f882425afbcd5",
"text": "This paper describes a CMOS-based time-of-flight depth sensor and presents some experimental data while addressing various issues arising from its use. Our system is a single-chip solution based on a special CMOS pixel structure that can extract phase information from the received light pulses. The sensor chip integrates a 64x64 pixel array with a high-speed clock generator and ADC. A unique advantage of the chip is that it can be manufactured with an ordinary CMOS process. Compared with other types of depth sensors reported in the literature, our solution offers significant advantages, including superior accuracy, high frame rate, cost effectiveness and a drastic reduction in processing required to construct the depth maps. We explain the factors that determine the resolution of our system, discuss various problems that a time-of-flight depth sensor might face, and propose practical solutions.",
"title": ""
},
{
"docid": "c1017772d5986b4676fc36b14791f735",
"text": "Research and industry increasingly make use of large amounts of data to guide decision-making. To do this, however, data needs to be analyzed in typically nontrivial refinement processes, which require technical expertise about methods and algorithms, experience with how a precise analysis should proceed, and knowledge about an exploding number of analytic approaches. To alleviate these problems, a plethora of different systems have been proposed that “intelligently” help users to analyze their data.\n This article provides a first survey to almost 30 years of research on intelligent discovery assistants (IDAs). It explicates the types of help IDAs can provide to users and the kinds of (background) knowledge they leverage to provide this help. Furthermore, it provides an overview of the systems developed over the past years, identifies their most important features, and sketches an ideal future IDA as well as the challenges on the road ahead.",
"title": ""
},
{
"docid": "179c8e5dba09c4bac35522b0f8a2be88",
"text": "The purpose of this research was to explore the experiences of left-handed adults. Four semistructured interviews were conducted with left-handed adults (2 men and 2 women) about their experiences. After transcribing the data, interpretative phenomenological analysis (IPA), which is a qualitative approach, was utilized to analyze the data. The analysis highlighted some major themes which were organized to make a model of life experiences of left-handers. The highlighted themes included Left-handers‟ Development: Interplay of Heredity Basis and Environmental Influences, Suppression of Left-hand, Support and Consideration, Feeling It Is Okay, Left-handers as Being Particular, Physical and Psychological Health Challenges, Agonized Life, Struggle for Maintaining Identity and Transforming Attitude, Attitudinal Barriers to Equality and Acceptance. Implications of the research for parents, teachers, and psychologists are discussed.",
"title": ""
},
{
"docid": "246faf136fc925f151f7006a08fcee2d",
"text": "A key goal of smart grid initiatives is significantly increasing the fraction of grid energy contributed by renewables. One challenge with integrating renewables into the grid is that their power generation is intermittent and uncontrollable. Thus, predicting future renewable generation is important, since the grid must dispatch generators to satisfy demand as generation varies. While manually developing sophisticated prediction models may be feasible for large-scale solar farms, developing them for distributed generation at millions of homes throughout the grid is a challenging problem. To address the problem, in this paper, we explore automatically creating site-specific prediction models for solar power generation from National Weather Service (NWS) weather forecasts using machine learning techniques. We compare multiple regression techniques for generating prediction models, including linear least squares and support vector machines using multiple kernel functions. We evaluate the accuracy of each model using historical NWS forecasts and solar intensity readings from a weather station deployment for nearly a year. Our results show that SVM-based prediction models built using seven distinct weather forecast metrics are 27% more accurate for our site than existing forecast-based models.",
"title": ""
},
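Because the entry above builds site-specific regression models from weather-forecast metrics, the sketch below shows the general shape of such a pipeline using scikit-learn's SVR on synthetic data. The seven feature columns merely stand in for forecast metrics (sky cover, temperature, humidity, and so on); the RBF kernel, the C value, and the data are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in data: rows are forecasts, columns are weather metrics,
# y is the observed solar intensity to be predicted.
rng = np.random.default_rng(1)
X = rng.random((200, 7))
y = 1000.0 * (1.0 - X[:, 0]) + 50.0 * rng.standard_normal(200)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)
print(model.predict(X[:3]))  # predicted solar intensity for three forecasts
```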
{
"docid": "8ea9944c2326e3aba41701919045779f",
"text": "Accurate keypoint localization of human pose needs diversified features: the high level for contextual dependencies and the low level for detailed refinement of joints. However, the importance of the two factors varies from case to case, but how to efficiently use the features is still an open problem. Existing methods have limitations in preserving low level features, adaptively adjusting the importance of different levels of features, and modeling the human perception process. This paper presents three novel techniques step by step to efficiently utilize different levels of features for human pose estimation. Firstly, an inception of inception (IOI) block is designed to emphasize the low level features. Secondly, an attention mechanism is proposed to adjust the importance of individual levels according to the context. Thirdly, a cascaded network is proposed to sequentially localize the joints to enforce message passing from joints of stand-alone parts like head and torso to remote joints like wrist or ankle. Experimental results demonstrate that the proposed method achieves the state-of-the-art performance on both MPII and",
"title": ""
},
{
"docid": "7264274530c5e42e09f91592f606364b",
"text": "In this paper, we propose a novel NoC architecture, called darkNoC, where multiple layers of architecturally identical, but physically different routers are integrated, leveraging the extra transistors available due to dark silicon. Each layer is separately optimized for a particular voltage-frequency range by the adroit use of multi-Vt circuit optimization. At a given time, only one of the network layers is illuminated while all the other network layers are dark. We provide architectural support for seamless integration of multiple network layers, and a fast inter-layer switching mechanism without dropping in-network packets. Our experiments on a 4 × 4 mesh with multi-programmed real application workloads show that darkNoC improves energy-delay product by up to 56% compared to a traditional single layer NoC with state-of-the-art DVFS. This illustrates darkNoC can be used as an energy-efficient communication fabric in future dark silicon chips.",
"title": ""
},
{
"docid": "ca7fa11ff559a572499fc0f319cba83c",
"text": "54-year-old male with a history of necrolytic migratory erythema (NME) (Figs. 1 and 2) and glossitis. Routine blood tests were normal except for glucose: 145 mg/dL. Baseline plasma glucagon levels were 1200 pg/mL. Serum zinc was 97 mcg/dL. An abdominal CT scan showed a large mass involving the body and tail of the pancreas. A dis-tal pancreatectomy with splenectomy was performed and no evidence of metastatic disease was observed. The skin rash cleared within a week after the operation and the patient remains free of disease at 38 months following surgery. COMMENTS Glucagonoma syndrome is a paraneoplastic phenomenon comprising a pancreatic glucagon-secreting insular tumor, necrolytic migratory erythema (NME), diabetes, weight loss, anemia, stomatitis, thrombo-embolism, dyspepsia, and neu-ropsychiatric disturbances. The occurrence of one or more of these symptoms associated with a proven pancreatic neo-plasm fits this diagnosis (1). Other skin and mucosal changes such as atrophic glossitis, cheylitis, and inflammation of the oral mucosa may be found (2). Hyperglycemia may be included –multiple endocrine neoplasia syndrome, i.e., Zollinger-Ellison syndrome– or the disease may result from a glucagon-secreting tumor alone. These tumors are of slow growth and present with non-specific symptoms in early stages. At least 50% are metasta-tic at the time of diagnosis, and therefore have a poor prognosis. Fig. 2.-NME. Necrolytic migratory erythema. ENM. Eritema necrolítico migratorio.",
"title": ""
},
{
"docid": "3789f0298f0ad7935e9267fa64c33a59",
"text": "We study session key distribution in the three-party setting of Needham and Schroeder. (This is the trust model assumed by the popular Kerberos authentication system.) Such protocols are basic building blocks for contemporary distributed systems|yet the underlying problem has, up until now, lacked a de nition or provably-good solution. One consequence is that incorrect protocols have proliferated. This paper provides the rst treatment of this problem in the complexitytheoretic framework of modern cryptography. We present a de nition, protocol, and a proof that the protocol satis es the de nition, assuming the (minimal) assumption of a pseudorandom function. When this assumption is appropriately instantiated, our protocols are simple and e cient. Abstract appearing in Proceedings of the 27th ACM Symposium on the Theory of Computing, May 1995.",
"title": ""
}
] |
scidocsrr
|
abe3acf92e474cdec6fd595d6388dd7b
|
Sensor for Measuring Strain in Textile
|
[
{
"docid": "13d8ce0c85befb38e6f2da583ac0295b",
"text": "The addition of sensors to wearable computers allows them to adapt their functions to more suit the activities and situation of their wearers. A wearable sensor badge is described constructed from (hard) electronic components, which can sense perambulatory activities for context-awareness. A wearable sensor jacket is described that uses advanced knitting techniques to form (soft) fabric stretch sensors positioned to measure upper limb and body movement. Worn on-the-hip, or worn as clothing, these unobtrusive sensors supply abstract information about your current activity to your other wearable computers.",
"title": ""
}
] |
[
{
"docid": "df96263c86a36ed30e8a074354b09239",
"text": "We propose three iterative superimposed-pilot based channel estimators for Orthogonal Frequency Division Multiplexing (OFDM) systems. Two are approximate maximum-likelihood, derived by using a Taylor expansion of the conditional probability density function of the received signal or by approximating the OFDM time signal as Gaussian, and one is minimum-mean square error. The complexity per iteration of these estimators is given by approximately O(NL2), O(N3) and O(NL), where N is the number of OFDM subcarriers and L is the channel length (time). Two direct (non-iterative) data detectors are also derived by averaging the log likelihood function over the channel statistics. These detectors require minimising the cost metric in an integer space, and we suggest the use of the sphere decoder for them. The Cramér--Rao bound for superimposed pilot based channel estimation is derived, and this bound is achieved by the proposed estimators. The optimal pilot placement is shown to be the equally spaced distribution of pilots. The bit error rate of the proposed estimators is simulated for N = 32 OFDM system. Our estimators perform fairly close to a separated training scheme, but without any loss of spectral efficiency. Copyright © 2011 John Wiley & Sons, Ltd. *Correspondence Chintha Tellambura, Department of Electrical and Computer Engineering, University Alberta, Edmonton, Alberta, Canada T6G 2C5. E-mail: [email protected] Received 20 July 2009; Revised 23 July 2010; Accepted 13 October 2010",
"title": ""
},
{
"docid": "f90471c5c767c40be52c182055c9ebca",
"text": "Deep intracranial tumor removal can be achieved if the neurosurgical robot has sufficient flexibility and stability. Toward achieving this goal, we have developed a spring-based continuum robot, namely a minimally invasive neurosurgical intracranial robot (MINIR-II) with novel tendon routing and tunable stiffness for use in a magnetic resonance imaging (MRI) environment. The robot consists of a pair of springs in parallel, i.e., an inner interconnected spring that promotes flexibility with decoupled segment motion and an outer spring that maintains its smooth curved shape during its interaction with the tissue. We propose a shape memory alloy (SMA) spring backbone that provides local stiffness control and a tendon routing configuration that enables independent segment locking. In this paper, we also present a detailed local stiffness analysis of the SMA backbone and model the relationship between the resistive force at the robot tip and the tension in the tendon. We also demonstrate through experiments, the validity of our local stiffness model of the SMA backbone and the correlation between the tendon tension and the resistive force. We also performed MRI compatibility studies of the three-segment MINIR-II robot by attaching it to a robotic platform that consists of SMA spring actuators with integrated water cooling modules.",
"title": ""
},
{
"docid": "ff04d4c2b6b39f53e7ddb11d157b9662",
"text": "Chiu proposed a clustering algorithm adjusting the numeric feature weights automatically for k-anonymity implementation and this approach gave a better clustering quality over the traditional generalization and suppression methods. In this paper, we propose an improved weighted-feature clustering algorithm which takes the weight of categorical attributes and the thesis of optimal k-partition into consideration. To show the effectiveness of our method, we do some information loss experiments to compare it with greedy k-member clustering algorithm.",
"title": ""
},
{
"docid": "921cb9021dc606af3b63116c45e093b2",
"text": "Since its introduction, the orbitrap has proven to be a robust mass analyzer that can routinely deliver high resolving power and mass accuracy. Unlike conventional ion traps such as the Paul and Penning traps, the orbitrap uses only electrostatic fields to confine and to analyze injected ion populations. In addition, its relatively low cost, simple design and high space-charge capacity make it suitable for tackling complex scientific problems in which high performance is required. This review begins with a brief account of the set of inventions that led to the orbitrap, followed by a qualitative description of ion capture, ion motion in the trap and modes of detection. Various orbitrap instruments, including the commercially available linear ion trap-orbitrap hybrid mass spectrometers, are also discussed with emphasis on the different methods used to inject ions into the trap. Figures of merit such as resolving power, mass accuracy, dynamic range and sensitivity of each type of instrument are compared. In addition, experimental techniques that allow mass-selective manipulation of the motion of confined ions and their potential application in tandem mass spectrometry in the orbitrap are described. Finally, some specific applications are reviewed to illustrate the performance and versatility of the orbitrap mass spectrometers.",
"title": ""
},
{
"docid": "c385054322970c86d3f08b298aa811e2",
"text": "Recently, a small number of papers have appeared in which the authors implement stochastic search algorithms, such as evolutionary computation, to generate game content, such as levels, rules and weapons. We propose a taxonomy of such approaches, centring on what sort of content is generated, how the content is represented, and how the quality of the content is evaluated. The relation between search-based and other types of procedural content generation is described, as are some of the main research challenges in this new field. The paper ends with some successful examples of this approach.",
"title": ""
},
{
"docid": "93625a1cc77929e98a3bdbf30ac16f3a",
"text": "The performance of rasterization-based rendering on current GPUs strongly depends on the abilities to avoid overdraw and to prevent rendering triangles smaller than the pixel size. Otherwise, the rates at which highresolution polygon models can be displayed are affected significantly. Instead of trying to build these abilities into the rasterization-based rendering pipeline, we propose an alternative rendering pipeline implementation that uses rasterization and ray-casting in every frame simultaneously to determine eye-ray intersections. To make ray-casting competitive with rasterization, we introduce a memory-efficient sample-based data structure which gives rise to an efficient ray traversal procedure. In combination with a regular model subdivision, the most optimal rendering technique can be selected at run-time for each part. For very large triangle meshes our method can outperform pure rasterization and requires a considerably smaller memory budget on the GPU. Since the proposed data structure can be constructed from any renderable surface representation, it can also be used to efficiently render isosurfaces in scalar volume fields. The compactness of the data structure allows rendering from GPU memory when alternative techniques already require exhaustive paging.",
"title": ""
},
{
"docid": "e263a93a9d936a90f4e513ac65317541",
"text": "Neuronal power attenuation or enhancement in specific frequency bands over the sensorimotor cortex, called Event-Related Desynchronization (ERD) or Event-Related Synchronization (ERS), respectively, is a major phenomenon in brain activities involved in imaginary movement of body parts. However, it is known that the nature of motor imagery-related electroencephalogram (EEG) signals is non-stationary and highly timeand frequency-dependent spatial filter, which we call ‘non-homogeneous filter.’ We adaptively select bases of spatial filters over time and frequency. By taking both temporal and spectral features of EEGs in finding a spatial filter into account it is beneficial to be able to consider non-stationarity of EEG signals. In order to consider changes of ERD/ERS patterns over the time–frequency domain, we devise a spectrally and temporally weighted classification method via statistical analysis. Our experimental results on the BCI Competition IV dataset II-a and BCI Competition II dataset IV clearly presented the effectiveness of the proposed method outperforming other competing methods in the literature. & 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5e6990d8f1f81799e2e7fdfe29d14e4d",
"text": "Underwater wireless communications refer to data transmission in unguided water environment through wireless carriers, i.e., radio-frequency (RF) wave, acoustic wave, and optical wave. In comparison to RF and acoustic counterparts, underwater optical wireless communication (UOWC) can provide a much higher transmission bandwidth and much higher data rate. Therefore, we focus, in this paper, on the UOWC that employs optical wave as the transmission carrier. In recent years, many potential applications of UOWC systems have been proposed for environmental monitoring, offshore exploration, disaster precaution, and military operations. However, UOWC systems also suffer from severe absorption and scattering introduced by underwater channels. In order to overcome these technical barriers, several new system design approaches, which are different from the conventional terrestrial free-space optical communication, have been explored in recent years. We provide a comprehensive and exhaustive survey of the state-of-the-art UOWC research in three aspects: 1) channel characterization; 2) modulation; and 3) coding techniques, together with the practical implementations of UOWC.",
"title": ""
},
{
"docid": "73b0a5820c8268bb5911e1b44401273b",
"text": "In typical reinforcement learning (RL), the environment is assumed given and the goal of the learning is to identify an optimal policy for the agent taking actions through its interactions with the environment. In this paper, we extend this setting by considering the environment is not given, but controllable and learnable through its interaction with the agent at the same time. This extension is motivated by environment design scenarios in the realworld, including game design, shopping space design and traffic signal design. Theoretically, we find a dual Markov decision process (MDP) w.r.t. the environment to that w.r.t. the agent, and derive a policy gradient solution to optimizing the parametrized environment. Furthermore, discontinuous environments are addressed by a proposed general generative framework. Our experiments on a Maze game design task show the effectiveness of the proposed algorithms in generating diverse and challenging Mazes against various agent settings.",
"title": ""
},
{
"docid": "83b5da6ab8ab9a906717fda7aa66dccb",
"text": "Image quality assessment (IQA) tries to estimate human perception based image visual quality in an objective manner. Existing approaches target this problem with or without reference images. For no-reference image quality assessment, there is no given reference image or any knowledge of the distortion type of the image. Previous approaches measure the image quality from signal level rather than semantic analysis. They typically depend on various features to represent local characteristic of an image. In this paper we propose a new no-reference (NR) image quality assessment (IQA) framework based on semantic obviousness. We discover that semantic-level factors affect human perception of image quality. With such observation, we explore semantic obviousness as a metric to perceive objects of an image. We propose to extract two types of features, one to measure the semantic obviousness of the image and the other to discover local characteristic. Then the two kinds of features are combined for image quality estimation. The principles proposed in our approach can also be incorporated with many existing IQA algorithms to boost their performance. We evaluate our approach on the LIVE dataset. Our approach is demonstrated to be superior to the existing NR-IQA algorithms and comparable to the state-of-the-art full-reference IQA (FR-IQA) methods. Cross-dataset experiments show the generalization ability of our approach.",
"title": ""
},
{
"docid": "63115b12e4a8192fdce26eb7e2f8989a",
"text": "Theorems and techniques to form different types of transformationally invariant processing and to produce the same output quantitatively based on either transformationally invariant operators or symmetric operations have recently been introduced by the authors. In this study, we further propose to compose a geared rotationally identical CNN system (GRI-CNN) with a small angle increment by connecting networks of participated processes at the first flatten layer. Using an ordinary CNN structure as a base, requirements for constructing a GRI-CNN include the use of either symmetric input vector or kernels with an angle increment that can form a complete cycle as a \"gearwheel\". Four basic GRI-CNN structures were studied. Each of them can produce quantitatively identical output results when a rotation angle of the input vector is evenly divisible by the increment angle of the gear. Our study showed when a rotated input vector does not match to a gear angle, the GRI-CNN can also produce a highly consistent result. With an ultrafine increment angle (e.g., 1 or 0.1), a virtually isotropic CNN system can be constructed.",
"title": ""
},
{
"docid": "18bc45b89f56648176ab3e1b9658ec16",
"text": "In this experiment, it was defined a protocol of fluorescent probes combination: propidium iodide (PI), fluorescein isothiocyanate-conjugated Pisum sativum agglutinin (FITC-PSA), and JC-1. For this purpose, four ejaculates from three different rams (n=12), all showing motility 80% and abnormal morphology 10%, were diluted in TALP medium and split into two aliquots. One of the aliquots was flash frozen and thawed in three continuous cycles, to induce damage in cellular membranes and to disturb mitochondrial function. Three treatments were prepared with the following fixed ratios of fresh semen:flash frozen semen: 0:100 (T0), 50:50 (T50), and 100:0 (T100). Samples were stained in the proposal protocol and evaluated by epifluorescence microscopy. For plasmatic membrane integrity, detected by PI probe, it was obtained the equation: Ŷ=1.09+0.86X (R 2 =0.98). The intact acrosome, verified by the FITC-PSA probe, produced the equation: Ŷ=2.76+0.92X (R 2 =0.98). The high mitochondrial membrane potential, marked in red-orange by JC-1, was estimated by the equation: Ŷ=1.90+0.90X (R 2 =0.98). The resulting linear equations demonstrate that this technique is efficient and practical for the simultaneous evaluations of the plasmatic, acrosomal, and mitochondrial membranes in ram spermatozoa.",
"title": ""
},
{
"docid": "4c200a1452ef7951cdbad6fe98cc379a",
"text": "A key challenge for timeline summarization is to generate a concise, yet complete storyline from large collections of news stories. Previous studies in extractive timeline generation are limited in two ways: first, most prior work focuses on fully-observable ranking models or clustering models with hand-designed features that may not generalize well. Second, most summarization corpora are text-only, which means that text is the sole source of information considered in timeline summarization, and thus, the rich visual content from news images is ignored. To solve these issues, we leverage the success of matrix factorization techniques from recommender systems, and cast the problem as a sentence recommendation task, using a representation learning approach. To augment text-only corpora, for each candidate sentence in a news article, we take advantage of top-ranked relevant images from the Web and model the image using a convolutional neural network architecture. Finally, we propose a scalable low-rank approximation approach for learning joint embeddings of news stories and images. In experiments, we compare our model to various competitive baselines, and demonstrate the stateof-the-art performance of the proposed textbased and multimodal approaches.",
"title": ""
},
{
"docid": "111e34b97cbefb43dc611efe434bbc30",
"text": "This paper presents a model-checking approach for analyzing discrete-time Markov reward models. For this purpose, the temporal logic probabilistic CTL is extended with reward constraints. This allows to formulate complex measures – involving expected as well as accumulated rewards – in a precise and succinct way. Algorithms to efficiently analyze such formulae are introduced. The approach is illustrated by model-checking a probabilistic cost model of the IPv4 zeroconf protocol for distributed address assignment in ad-hoc networks.",
"title": ""
},
{
"docid": "9fb27226848da6b18fdc1e3b3edf79c9",
"text": "In the last few years thousands of scientific papers have investigated sentiment analysis, several startups that measure opinions on real data have emerged and a number of innovative products related to this theme have been developed. There are multiple methods for measuring sentiments, including lexical-based and supervised machine learning methods. Despite the vast interest on the theme and wide popularity of some methods, it is unclear which one is better for identifying the polarity (i.e., positive or negative) of a message. Accordingly, there is a strong need to conduct a thorough apple-to-apple comparison of sentiment analysis methods, as they are used in practice, across multiple datasets originated from different data sources. Such a comparison is key for understanding the potential limitations, advantages, and disadvantages of popular methods. This article aims at filling this gap by presenting a benchmark comparison of twenty-four popular sentiment analysis methods (which we call the state-of-the-practice methods). Our evaluation is based on a benchmark of eighteen labeled datasets, covering messages posted on social networks, movie and product reviews, as well as opinions and comments in news articles. Our results highlight the extent to which the prediction performance of these methods varies considerably across datasets. Aiming at boosting the development of this research area, we open the methods’ codes and datasets used in this article, deploying them in a benchmark system, which provides an open API for accessing and comparing sentence-level sentiment analysis methods.",
"title": ""
},
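The benchmarking entry above compares lexicon-based and supervised sentiment methods. For readers unfamiliar with the lexical family, this is a deliberately tiny sketch of lexicon-based polarity scoring; the word list and the neutral threshold are invented for illustration and bear no relation to the twenty-four methods actually benchmarked.

```python
# Minimal lexicon-based polarity scorer (illustrative only).
LEXICON = {"good": 1, "great": 2, "love": 2, "bad": -1, "awful": -2, "hate": -2}

def polarity(text):
    # Sum the valence of every known word after stripping punctuation.
    score = sum(LEXICON.get(w.strip(".,!?").lower(), 0) for w in text.split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(polarity("I love this phone, the camera is great!"))
print(polarity("Awful battery and a bad screen."))
```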
{
"docid": "6ca59955cbf84ab9dce3c444709040b7",
"text": "We introduce SPHINCS-Simpira, which is a variant of the SPHINCS signature scheme with Simpira as a building block. SPHINCS was proposed by Bernstein et al. at EUROCRYPT 2015 as a hash-based signature scheme with post-quantum security. At ASIACRYPT 2016, Gueron and Mouha introduced the Simpira family of cryptographic permutations, which delivers high throughput on modern 64-bit processors by using only one building block: the AES round function. The Simpira family claims security against structural distinguishers with a complexity up to 2 using classical computers. In this document, we explain why the same claim can be made against quantum computers as well. Although Simpira follows a very conservative design strategy, our benchmarks show that SPHINCS-Simpira provides a 1.5× speed-up for key generation, a 1.4× speed-up for signing 59-byte messages, and a 2.0× speed-up for verifying 59-byte messages compared to the originally proposed SPHINCS-256.",
"title": ""
},
{
"docid": "e016d5fc261def252f819f350b155c1a",
"text": "Risk reduction is one of the key objectives pursued by transport safety policies. Particularly, the formulation and implementation of transport safety policies needs the systematic assessment of the risks, the specification of residual risk targets and the monitoring of progresses towards those ones. Risk and safety have always been considered critical in civil aviation. The purpose of this paper is to describe and analyse safety aspects in civil airports. An increase in airport capacity usually involves changes to runways layout, route structures and traffic distribution, which in turn effect the risk level around the airport. For these reasons third party risk becomes an important issue in airports development. To avoid subjective interpretations and to increase model accuracy, risk information are colleted and evaluated in a rational and mathematical manner. The method may be used to draw risk contour maps so to provide a guide to local and national authorities, to population who live around the airport, and to airports operators. Key-Words: Risk Management, Risk assessment methodology, Safety Civil aviation.",
"title": ""
},
{
"docid": "45881ab3fc9b2d09f211808e8c9b0a3c",
"text": "Nowadays a large number of user-adaptive systems has been developed. Commonly, the effort to build user models is repeated across applications and domains, due to the lack of interoperability and synchronization among user-adaptive systems. There is a strong need for the next generation of user models to be interoperable, i.e. to be able to exchange user model portions and to use the information that has been exchanged to enrich the user experience. This paper presents an overview of the well-established literature dealing with user model interoperability, discussing the most representative work which has provided valuable solutions to face interoperability issues. Based on a detailed decomposition and a deep analysis of the selected work, we have isolated a set of dimensions characterizing the user model interoperability process along which the work has been classified. Starting from this analysis, the paper presents some open issues and possible future deployments in the area.",
"title": ""
},
{
"docid": "e291628468d30d72b75bcd407382c8eb",
"text": "In order to implement reliable digital system, it is becoming important making tests and finding bugs by setting up a verification environment. It is possible to set up effective verification environment by using Universal Verification Methodology which is standardized and used in worldwide chip industry. In this work, the slave circuit of Serial Peripheral Interface, which is commonly used for communication integrated circuits, have been designed with hardware description and verification language SystemVerilog and creating test environment with UVM.",
"title": ""
},
{
"docid": "7ad4f52279e85f8e20239e1ea6c85bbb",
"text": "One of the most exciting but challenging endeavors in music research is to develop a computational model that comprehends the affective content of music signals and organizes a music collection according to emotion. In this paper, we propose a novel acoustic emotion Gaussians (AEG) model that defines a proper generative process of emotion perception in music. As a generative model, AEG permits easy and straightforward interpretations of the model learning processes. To bridge the acoustic feature space and music emotion space, a set of latent feature classes, which are learned from data, is introduced to perform the end-to-end semantic mappings between the two spaces. Based on the space of latent feature classes, the AEG model is applicable to both automatic music emotion annotation and emotion-based music retrieval. To gain insights into the AEG model, we also provide illustrations of the model learning process. A comprehensive performance study is conducted to demonstrate the superior accuracy of AEG over its predecessors, using two emotion annotated music corpora MER60 and MTurk. Our results show that the AEG model outperforms the state-of-the-art methods in automatic music emotion annotation. Moreover, for the first time a quantitative evaluation of emotion-based music retrieval is reported.",
"title": ""
}
] |
scidocsrr
|
48e2653f4a59a0c4f889d6b75a1c41ff
|
Click chain model in web search
|
[
{
"docid": "71be2ab6be0ab5c017c09887126053e5",
"text": "One of the most important yet insufficiently studied issues in online advertising is the externality effect among ads: the value of an ad impression on a page is affected not just by the location that the ad is placed in, but also by the set of other ads displayed on the page. For instance, a high quality competing ad can detract users from another ad, while a low quality ad could cause the viewer to abandon the page",
"title": ""
}
] |
[
{
"docid": "7e4a485d489f9e9ce94889b52214c804",
"text": "A situated ontology is a world model used as a computational resource for solving a particular set of problems. It is treated as neither a \\natural\" entity waiting to be discovered nor a purely theoretical construct. This paper describes how a semantico-pragmatic analyzer, Mikrokosmos, uses knowledge from a situated ontology as well as from language-speciic knowledge sources (lexicons and microtheory rules). Also presented are some guidelines for acquiring ontological concepts and an overview of the technology developed in the Mikrokosmos project for large-scale acquisition and maintenance of ontological databases. Tools for acquiring, maintaining, and browsing ontologies can be shared more readily than ontologies themselves. Ontological knowledge bases can be shared as computational resources if such tools provide translators between diierent representation formats. 1 A Situated Ontology World models (ontologies) in computational applications are artiicially constructed entities. They are created, not discovered. This is why so many diierent world models were suggested. Many ontologies are developed for purely theoretical purposes or without the context of a practical situation (e. Many practical knowledge-based systems, on the other hand, employ world or domain models without recognizing them as a separate knowledge source (e.g., Farwell, et al. 1993). In the eld of natural language processing (NLP) there is now a consensus that all NLP systems that seek to represent and manipulate meanings of texts need an ontology (e. In our continued eeorts to build a multilingual knowledge-based machine translation (KBMT) system using an interlingual meaning representation (e.g., Onyshkevych and Nirenburg, 1994), we have developed an ontology to facilitate natural language interpretation and generation. The central goal of the Mikrokosmos project is to develop a system that produces a comprehensive Text Meaning Representation (TMR) for an input text in any of a set of source languages. 1 Knowledge that supports this process is stored both in language-speciic knowledge sources and in an independently motivated, language-neutral ontology (e. An ontology for NLP purposes is a body of knowledge about the world (or a domain) that a) is a repository of primitive symbols used in meaning representation; b) organizes these symbols in a tangled subsumption hierarchy; and c) further interconnects these symbols using a rich system of semantic and discourse-pragmatic relations deened among the concepts. In order for such an ontology to become a computational resource for solving problems such as ambiguity and reference resolution, it must be actually constructed, not merely deened formally, as is the …",
"title": ""
},
{
"docid": "7973587470f4e40f04288fb261445cac",
"text": "In developed countries, vitamin B12 (cobalamin) deficiency usually occurs in children, exclusively breastfed ones whose mothers are vegetarian, causing low body stores of vitamin B12. The haematologic manifestation of vitamin B12 deficiency is pernicious anaemia. It is a megaloblastic anaemia with high mean corpuscular volume and typical morphological features, such as hyperlobulation of the nuclei of the granulocytes. In advanced cases, neutropaenia and thrombocytopaenia can occur, simulating aplastic anaemia or leukaemia. In addition to haematological symptoms, infants may experience weakness, fatigue, failure to thrive, and irritability. Other common findings include pallor, glossitis, vomiting, diarrhoea, and icterus. Neurological symptoms may affect the central nervous system and, in severe cases, rarely cause brain atrophy. Here, we report an interesting case, a 12-month old infant, who was admitted with neurological symptoms and diagnosed with vitamin B12 deficiency.",
"title": ""
},
{
"docid": "effe9cf542849a0da41f984f7097228a",
"text": "We introduce FaceVR, a novel method for gaze-aware facial reenactment in the Virtual Reality (VR) context. The key component of FaceVR is a robust algorithm to perform real-time facial motion capture of an actor who is wearing a head-mounted display (HMD), as well as a new data-driven approach for eye tracking from monocular videos. In addition to these face reconstruction components, FaceVR incorporates photo-realistic re-rendering in real time, thus allowing artificial modifications of face and eye appearances. For instance, we can alter facial expressions, change gaze directions, or remove the VR goggles in realistic re-renderings. In a live setup with a source and a target actor, we apply these newly-introduced algorithmic components. We assume that the source actor is wearing a VR device, and we capture his facial expressions and eye movement in real-time. For the target video, we mimic a similar tracking process; however, we use the source input to drive the animations of the target video, thus enabling gaze-aware facial reenactment. To render the modified target video on a stereo display, we augment our capture and reconstruction process with stereo data. In the end, FaceVR produces compelling results for a variety of applications, such as gaze-aware facial reenactment, reenactment in virtual reality, removal of VR goggles, and re-targeting of somebody's gaze direction in a video conferencing call.",
"title": ""
},
{
"docid": "edba38e0515256fbb2e72fce87747472",
"text": "The risk of predation can have large effects on ecological communities via changes in prey behaviour, morphology and reproduction. Although prey can use a variety of sensory signals to detect predation risk, relatively little is known regarding the effects of predator acoustic cues on prey foraging behaviour. Here we show that an ecologically important marine crab species can detect sound across a range of frequencies, probably in response to particle acceleration. Further, crabs suppress their resource consumption in the presence of experimental acoustic stimuli from multiple predatory fish species, and the sign and strength of this response is similar to that elicited by water-borne chemical cues. When acoustic and chemical cues were combined, consumption differed from expectations based on independent cue effects, suggesting redundancies among cue types. These results highlight that predator acoustic cues may influence prey behaviour across a range of vertebrate and invertebrate taxa, with the potential for cascading effects on resource abundance.",
"title": ""
},
{
"docid": "52462bd444f44910c18b419475a6c235",
"text": "Snoring is a common symptom of serious chronic disease known as obstructive sleep apnea (OSA). Knowledge about the location of obstruction site (VVelum, OOropharyngeal lateral walls, T-Tongue, E-Epiglottis) in the upper airways is necessary for proper surgical treatment. In this paper we propose a dual source-filter model similar to the source-filter model of speech to approximate the generation process of snore audio. The first filter models the vocal tract from lungs to the point of obstruction with white noise excitation from the lungs. The second filter models the vocal tract from the obstruction point to the lips/nose with impulse train excitation which represents vibrations at the point of obstruction. The filter coefficients are estimated using the closed and open phases of the snore beat cycle. VOTE classification is done by using SVM classifier and filter coefficients as features. The classification experiments are performed on the development set (283 snore audios) of the MUNICH-PASSAU SNORE SOUND CORPUS (MPSSC). We obtain an unweighted average recall (UAR) of 49.58%, which is higher than the INTERSPEECH-2017 snoring sub-challenge baseline technique by ∼3% (absolute).",
"title": ""
},
{
"docid": "c250963a2b536a9ce9149f385f4d2a0f",
"text": "The systematic review (SR) is a methodology used to find and aggregate all relevant existing evidence about a specific research question of interest. One of the activities associated with the SR process is the selection of primary studies, which is a time consuming manual task. The quality of primary study selection impacts the overall quality of SR. The goal of this paper is to propose a strategy named “Score Citation Automatic Selection” (SCAS), to automate part of the primary study selection activity. The SCAS strategy combines two different features, content and citation relationships between the studies, to make the selection activity as automated as possible. Aiming to evaluate the feasibility of our strategy, we conducted an exploratory case study to compare the accuracy of selecting primary studies manually and using the SCAS strategy. The case study shows that for three SRs published in the literature and previously conducted in a manual implementation, the average effort reduction was 58.2 % when applying the SCAS strategy to automate part of the initial selection of primary studies, and the percentage error was 12.98 %. Our case study provided confidence in our strategy, and suggested that it can reduce the effort required to select the primary studies without adversely affecting the overall results of SR.",
"title": ""
},
{
"docid": "db4784e051b798dfa6c3efa5e84c4d00",
"text": "Purpose – The purpose of this paper is to propose and verify that the technology acceptance model (TAM) can be employed to explain and predict the acceptance of mobile learning (M-learning); an activity in which users access learning material with their mobile devices. The study identifies two factors that account for individual differences, i.e. perceived enjoyment (PE) and perceived mobility value (PMV), to enhance the explanatory power of the model. Design/methodology/approach – An online survey was conducted to collect data. A total of 313 undergraduate and graduate students in two Taiwan universities answered the questionnaire. Most of the constructs in the model were measured using existing scales, while some measurement items were created specifically for this research. Structural equation modeling was employed to examine the fit of the data with the model by using the LISREL software. Findings – The results of the data analysis shows that the data fit the extended TAM model well. Consumers hold positive attitudes for M-learning, viewing M-learning as an efficient tool. Specifically, the results show that individual differences have a great impact on user acceptance and that the perceived enjoyment and perceived mobility can predict user intentions of using M-learning. Originality/value – There is scant research available in the literature on user acceptance of M-learning from a customer’s perspective. The present research shows that TAM can predict user acceptance of this new technology. Perceived enjoyment and perceived mobility value are antecedents of user acceptance. The model enhances our understanding of consumer motivation of using M-learning. This understanding can aid our efforts when promoting M-learning.",
"title": ""
},
{
"docid": "996eb4470d33f00ed9cb9bcc52eb5d82",
"text": "Andrew is a distributed computing environment that is a synthesis of the personal computing and timesharing paradigms. When mature, it is expected to encompass over 5,000 workstations spanning the Carnegie Mellon University campus. This paper examines the security issues that arise in such an environment and describes the mechanisms that have been developed to address them. These mechanisms include the logical and physical separation of servers and clients, support for secure communication at the remote procedure call level, a distributed authentication service, a file-protection scheme that combines access lists with UNIX mode bits, and the use of encryption as a basic building block. The paper also discusses the assumptions underlying security in Andrew and analyzes the vulnerability of the system. Usage experience reveals that resource control, particularly of workstation CPU cycles, is more important than originally anticipated and that the mechanisms available to address this issue are rudimentary.",
"title": ""
},
{
"docid": "ba57246214ea44910e94471375836d87",
"text": "Collaborative filtering is a technique for recommending documents to users based on how similar their tastes are to other users. If two users tend to agree on what they like, the system will recommend the same documents to them. The generalized vector space model of information retrieval represents a document by a vector of its similarities to all other documents. The process of collaborative filtering is nearly identical to the process of retrieval using GVSM in a matrix of user ratings. Using this observation, a model for filtering collaboratively using document content is possible.",
"title": ""
},
{
"docid": "774bf4b0a2c8fe48607e020da2737041",
"text": "A class of three-dimensional planar arrays in substrate integrated waveguide (SIW) technology is proposed, designed and demonstrated with 8 × 16 elements at 35 GHz for millimeter-wave imaging radar system applications. Endfire element is generally chosen to ensure initial high gain and broadband characteristics for the array. Fermi-TSA (tapered slot antenna) structure is used as element to reduce the beamwidth. Corrugation is introduced to reduce the resulting antenna physical width without degradation of performance. The achieved measured gain in our demonstration is about 18.4 dBi. A taper shaped air gap in the center is created to reduce the coupling between two adjacent elements. An SIW H-to-E-plane vertical interconnect is proposed in this three-dimensional architecture and optimized to connect eight 1 × 16 planar array sheets to the 1 × 8 final network. The overall architecture is exclusively fabricated by the conventional PCB process. Thus, the developed SIW feeder leads to a significant reduction in both weight and cost, compared to the metallic waveguide-based counterpart. A complete antenna structure is designed and fabricated. The planar array ensures a gain of 27 dBi with low SLL of 26 dB and beamwidth as narrow as 5.15 degrees in the E-plane and 6.20 degrees in the 45°-plane.",
"title": ""
},
{
"docid": "e99369633599d38d84ad1a5c74695475",
"text": "Sarcasm is a form of language in which individual convey their message in an implicit way i.e. the opposite of what is implied. Sarcasm detection is the task of predicting sarcasm in text. This is the crucial step in sentiment analysis due to inherently ambiguous nature of sarcasm. With this ambiguity, sarcasm detection has always been a difficult task, even for humans. Therefore sarcasm detection has gained importance in many Natural Language Processing applications. In this paper, we describe approaches, issues, challenges and future scopes in sarcasm detection.",
"title": ""
},
{
"docid": "8877d6753d6b7cd39ba36c074ca56b00",
"text": "Perhaps the most fundamental application of affective computing will be Human-Computer Interaction (HCI) in which the computer should have the ability to detect and track the user's affective states, and make corresponding feedback. The human multi-sensor affect system defines the expectation of multimodal affect analyzer. In this paper, we present our efforts toward audio-visual HCI-related affect recognition. With HCI applications in mind, we take into account some special affective states which indicate users' cognitive/motivational states. Facing the fact that a facial expression is influenced by both an affective state and speech content, we apply a smoothing method to extract the information of the affective state from facial features. In our fusion stage, a voting method is applied to combine audio and visual modalities so that the final affect recognition accuracy is greatly improved. We test our bimodal affect recognition approach on 38 subjects with 11 HCI-related affect states. The extensive experimental results show that the average person-dependent affect recognition accuracy is almost 90% for our bimodal fusion.",
"title": ""
},
{
"docid": "8f1bcaed29644b80a623be8d26b81c20",
"text": "The GTZAN dataset appears in at least 100 published works, and is the most-used public dataset for evaluation in machine listening research for music genre recognition (MGR). Our recent work, however, shows GTZAN has several faults (repetitions, mislabelings, and distortions), which challenge the interpretability of any result derived using it. In this article, we disprove the claims that all MGR systems are affected in the same ways by these faults, and that the performances of MGR systems in GTZAN are still meaningfully comparable since they all face the same faults. We identify and analyze the contents of GTZAN, and provide a catalog of its faults. We review how GTZAN has been used in MGR research, and find few indications that its faults have been known and considered. Finally, we rigorously study the effects of its faults on evaluating five different MGR systems. The lesson is not to banish GTZAN, but to use it with consideration of its contents.",
"title": ""
},
{
"docid": "07575ce75d921d6af72674e1fe563ff7",
"text": "With a growing body of literature linking systems of high-performance work practices to organizational performance outcomes, recent research has pushed for examinations of the underlying mechanisms that enable this connection. In this study, based on a large sample of Welsh public-sector employees, we explored the role of several individual-level attitudinal factors--job satisfaction, organizational commitment, and psychological empowerment--as well as organizational citizenship behaviors that have the potential to provide insights into how human resource systems influence the performance of organizational units. The results support a unit-level path model, such that department-level, high-performance work system utilization is associated with enhanced levels of job satisfaction, organizational commitment, and psychological empowerment. In turn, these attitudinal variables were found to be positively linked to enhanced organizational citizenship behaviors, which are further related to a second-order construct measuring departmental performance.",
"title": ""
},
{
"docid": "0297af005c837e410272ab3152942f90",
"text": "Iris authentication is a popular method where persons are accurately authenticated. During authentication phase the features are extracted which are unique. Iris authentication uses IR images for authentication. This proposed work uses color iris images for authentication. Experiments are performed using ten different color models. This paper is focused on performance evaluation of color models used for color iris authentication. This proposed method is more reliable which cope up with different noises of color iris images. The experiments reveals the best selection of color model used for iris authentication. The proposed method is validated on UBIRIS noisy iris database. The results demonstrate that the accuracy is 92.1%, equal error rate of 0.072 and computational time is 0.039 seconds.",
"title": ""
},
{
"docid": "e1c927d7fbe826b741433c99fff868d0",
"text": "Multiclass maps are scatterplots, multidimensional projections, or thematic geographic maps where data points have a categorical attribute in addition to two quantitative attributes. This categorical attribute is often rendered using shape or color, which does not scale when overplotting occurs. When the number of data points increases, multiclass maps must resort to data aggregation to remain readable. We present multiclass density maps: multiple 2D histograms computed for each of the category values. Multiclass density maps are meant as a building block to improve the expressiveness and scalability of multiclass map visualization. In this article, we first present a short survey of aggregated multiclass maps, mainly from cartography. We then introduce a declarative model—a simple yet expressive JSON grammar associated with visual semantics—that specifies a wide design space of visualizations for multiclass density maps. Our declarative model is expressive and can be efficiently implemented in visualization front-ends such as modern web browsers. Furthermore, it can be reconfigured dynamically to support data exploration tasks without recomputing the raw data. Finally, we demonstrate how our model can be used to reproduce examples from the past and support exploring data at scale.",
"title": ""
},
{
"docid": "0c8b192807a6728be21e6a19902393c0",
"text": "The balance between facilitation and competition is likely to change with age due to the dynamic nature of nutrient, water and carbon cycles, and light availability during stand development. These processes have received attention in harsh, arid, semiarid and alpine ecosystems but are rarely examined in more productive communities, in mixed-species forest ecosystems or in long-term experiments spanning more than a decade. The aim of this study was to examine how inter- and intraspecific interactions between Eucalyptus globulus Labill. mixed with Acacia mearnsii de Wildeman trees changed with age and productivity in a field experiment in temperate south-eastern Australia. Spatially explicit neighbourhood indices were calculated to quantify tree interactions and used to develop growth models to examine how the tree interactions changed with time and stand productivity. Interspecific influences were usually less negative than intraspecific influences, and their difference increased with time for E. globulus and decreased with time for A. mearnsii. As a result, the growth advantages of being in a mixture increased with time for E. globulus and decreased with time for A. mearnsii. The growth advantage of being in a mixture also decreased for E. globulus with increasing stand productivity, showing that spatial as well as temporal dynamics in resource availability influenced the magnitude and direction of plant interactions.",
"title": ""
},
{
"docid": "41ac115647c421c44d7ef1600814dc3e",
"text": "PURPOSE\nThe bony skeleton serves as the scaffolding for the soft tissues of the face; however, age-related changes of bony morphology are not well defined. This study sought to compare the anatomic relationships of the facial skeleton and soft tissue structures between young and old men and women.\n\n\nMETHODS\nA retrospective review of CT scans of 100 consecutive patients imaged at Duke University Medical Center between 2004 and 2007 was performed using the Vitrea software package. The study population included 25 younger women (aged 18-30 years), 25 younger men, 25 older women (aged 55-65 years), and 25 older men. Using a standardized reference line, the distances from the anterior corneal plane to the superior orbital rim, lateral orbital rim, lower eyelid fat pad, inferior orbital rim, anterior cheek mass, and pyriform aperture were measured. Three-dimensional bony reconstructions were used to record the angular measurements of 4 bony regions: glabellar, orbital, maxillary, and pyriform aperture.\n\n\nRESULTS\nThe glabellar (p = 0.02), orbital (p = 0.0007), maxillary (p = 0.0001), and pyriform (p = 0.008) angles all decreased with age. The maxillary pyriform (p = 0.003) and infraorbital rim (p = 0.02) regressed with age. Anterior cheek mass became less prominent with age (p = 0.001), but the lower eyelid fat pad migrated anteriorly over time (p = 0.007).\n\n\nCONCLUSIONS\nThe facial skeleton appears to remodel throughout adulthood. Relative to the globe, the facial skeleton appears to rotate such that the frontal bone moves anteriorly and inferiorly while the maxilla moves posteriorly and superiorly. This rotation causes bony angles to become more acute and likely has an effect on the position of overlying soft tissues. These changes appear to be more dramatic in women.",
"title": ""
},
{
"docid": "041ca42d50e4cac92cf81c989a8527fb",
"text": "Helix antenna consists of a single conductor or multi-conductor open helix-shaped. Helix antenna has a three-dimensional shape. The shape of the helix antenna resembles a spring and the diameter and the distance between the windings of a certain size. This study aimed to design a signal amplifier wifi on 2.4 GHz. Materials used in the form of the pipe, copper wire, various connectors and wireless adapters and various other components. Mmmanagal describing simulation result on helix antenna. Further tested with wirelesmon software to test the wifi signal strength. The results are based Mmanagal, radiation patterns emitted achieve Ganin: 4.5 dBi horizontal polarization, F / B: −0,41dB; rear azimuth 1200 elevation 600, 2400 MHz, R27.9 and jX impedance −430.9, Elev: 64.40 real GND: 0.50 m height, and wifi signal strength increased from 47% to 55%.",
"title": ""
},
{
"docid": "857a2098e5eb48340699c6b7a29ec293",
"text": "Compressibiity of individuai sequences by the ciam of generaihd finite-atate information-losales encoders ia investigated These encodersrpnoperateinavariabie-ratemodeasweUasaflxedrateone,nnd they aiiow for any fhite-atate acheme of variabie-iength-to-variable-ien@ coding. For every individuai hfiite aeqence x a quantity p (x) ia defined, calledthecompressibilityofx,whirhisshowntobetheasymptotieatly attainable lower bound on the compression ratio tbat cao be achieved for x by any finite-state encoder. ‘flds is demonstrated by means of a amatructivecodtngtbeoremanditsconversethat,apartfnnntheirafymptotic significance, also provide useful performance criteria for finite and practicai data-compression taaka. The proposed concept of compressibility ia aiao shown to play a role analogous to that of entropy in ciaasicai informatfon theory where onedeaia with probabilistic ensembles of aequencea ratk Manuscript received June 10, 1977; revised February 20, 1978. J. Ziv is with Bell Laboratories, Murray Hill, NJ 07974, on leave from the Department of Electrical Engineering, Techmon-Israel Institute of Technology, Halfa, Israel. A. Lempel is with Sperry Research Center, Sudbury, MA 01776, on leave from the Department of Electrical Engineer@, Technion-Israel Institute of Technology, Haifa, Israel. tium with individuai sequences. Widie the delinition of p (x) aiiows a different machine for each different sequence to be compresse4 the constructive coding theorem ieada to a universal algorithm that is aaymik toticaiiy optfmai for au sequencea.",
"title": ""
}
] |
scidocsrr
|
2dfb5e06121079ab4320b8a496e4d45e
|
Dialog state tracking, a machine reading approach using Memory Network
|
[
{
"docid": "bf294a4c3af59162b2f401e2cdcb060b",
"text": "We present MCTest, a freely available set of stories and associated questions intended for research on the machine comprehension of text. Previous work on machine comprehension (e.g., semantic modeling) has made great strides, but primarily focuses either on limited-domain datasets, or on solving a more restricted goal (e.g., open-domain relation extraction). In contrast, MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Reading comprehension can test advanced abilities such as causal reasoning and understanding the world, yet, by being multiple-choice, still provide a clear metric. By being fictional, the answer typically can be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge that is required for the task. We present the scalable crowd-sourcing methods that allow us to cheaply construct a dataset of 500 stories and 2000 questions. By screening workers (with grammar tests) and stories (with grading), we have ensured that the data is the same quality as another set that we manually edited, but at one tenth the editing cost. By being open-domain, yet carefully restricted, we hope MCTest will serve to encourage research and provide a clear metric for advancement on the machine comprehension of text. 1 Reading Comprehension A major goal for NLP is for machines to be able to understand text as well as people. Several research disciplines are focused on this problem: for example, information extraction, relation extraction, semantic role labeling, and recognizing textual entailment. Yet these techniques are necessarily evaluated individually, rather than by how much they advance us towards the end goal. On the other hand, the goal of semantic parsing is the machine comprehension of text (MCT), yet its evaluation requires adherence to a specific knowledge representation, and it is currently unclear what the best representation is, for open-domain text. We believe that it is useful to directly tackle the top-level task of MCT. For this, we need a way to measure progress. One common method for evaluating someone’s understanding of text is by giving them a multiple-choice reading comprehension test. This has the advantage that it is objectively gradable (vs. essays) yet may test a range of abilities such as causal or counterfactual reasoning, inference among relations, or just basic understanding of the world in which the passage is set. Therefore, we propose a multiple-choice reading comprehension task as a way to evaluate progress on MCT. We have built a reading comprehension dataset containing 500 fictional stories, with 4 multiple choice questions per story. It was built using methods which can easily scale to at least 5000 stories, since the stories were created, and the curation was done, using crowd sourcing almost entirely, at a total of $4.00 per story. We plan to periodically update the dataset to ensure that methods are not overfitting to the existing data. The dataset is open-domain, yet restricted to concepts and words that a 7 year old is expected to understand. This task is still beyond the capability of today’s computers and algorithms.",
"title": ""
},
{
"docid": "3d22f5be70237ae0ee1a0a1b52330bfa",
"text": "Tracking the user's intention throughout the course of a dialog, called dialog state tracking, is an important component of any dialog system. Most existing spoken dialog systems are designed to work in a static, well-defined domain, and are not well suited to tasks in which the domain may change or be extended over time. This paper shows how recurrent neural networks can be effectively applied to tracking in an extended domain with new slots and values not present in training data. The method is evaluated in the third Dialog State Tracking Challenge, where it significantly outperforms other approaches in the task of tracking the user's goal. A method for online unsupervised adaptation to new domains is also presented. Unsupervised adaptation is shown to be helpful in improving word-based recurrent neural networks, which work directly from the speech recognition results. Word-based dialog state tracking is attractive as it does not require engineering a spoken language understanding system for use in the new domain and it avoids the need for a general purpose intermediate semantic representation.",
"title": ""
}
] |
[
{
"docid": "b2418dc7ae9659d643a74ba5c0be2853",
"text": "MITJA D. BACK*, LARS PENKE, STEFAN C. SCHMUKLE, KAROLINE SACHSE, PETER BORKENAU and JENS B. ASENDORPF Department of Psychology, Johannes Gutenberg-University Mainz, Germany Department of Psychology and Centre for Cognitive Ageing and Cognitive Epidemiology, University of Edinburgh, UK Department of Psychology, Westfälische Wilhelms-University Münster, Germany Department of Psychology, Martin-Luther University Halle-Wittenberg, Germany Department of Psychology, Martin-Luther University Halle-Wittenberg, Germany Department of Psychology, Humboldt University Berlin, Germany",
"title": ""
},
{
"docid": "05a77d687230dc28697ca1751586f660",
"text": "In recent years, there has been a huge increase in the number of bots online, varying from Web crawlers for search engines, to chatbots for online customer service, spambots on social media, and content-editing bots in online collaboration communities. The online world has turned into an ecosystem of bots. However, our knowledge of how these automated agents are interacting with each other is rather poor. Bots are predictable automatons that do not have the capacity for emotions, meaning-making, creativity, and sociality and it is hence natural to expect interactions between bots to be relatively predictable and uneventful. In this article, we analyze the interactions between bots that edit articles on Wikipedia. We track the extent to which bots undid each other's edits over the period 2001-2010, model how pairs of bots interact over time, and identify different types of interaction trajectories. We find that, although Wikipedia bots are intended to support the encyclopedia, they often undo each other's edits and these sterile \"fights\" may sometimes continue for years. Unlike humans on Wikipedia, bots' interactions tend to occur over longer periods of time and to be more reciprocated. Yet, just like humans, bots in different cultural environments may behave differently. Our research suggests that even relatively \"dumb\" bots may give rise to complex interactions, and this carries important implications for Artificial Intelligence research. Understanding what affects bot-bot interactions is crucial for managing social media well, providing adequate cyber-security, and designing well functioning autonomous vehicles.",
"title": ""
},
{
"docid": "854b2bfdef719879a437f2d87519d8e8",
"text": "The morality of transformational leadership has been sharply questioned, particularly by libertarians, “grass roots” theorists, and organizational development consultants. This paper argues that to be truly transformational, leadership must be grounded in moral foundations. The four components of authentic transformational leadership (idealized influence, inspirational motivation, intellectual stimulation, and individualized consideration) are contrasted with their counterfeits in dissembling pseudo-transformational leadership on the basis of (1) the moral character of the leaders and their concerns for self and others; (2) the ethical values embedded in the leaders’ vision, articulation, and program, which followers can embrace or reject; and (3) the morality of the processes of social ethical choices and action in which the leaders and followers engage and collectively pursue. The literature on transformational leadership is linked to the long-standing literature on virtue and moral character, as exemplified by Socratic and Confucian typologies. It is related as well to the major themes of the modern Western ethical agenda: liberty, utility, and distributive justice Deception, sophistry, and pretense are examined alongside issues of transcendence, agency, trust, striving for congruence in values, cooperative action, power, persuasion, and corporate governance to establish the strategic and moral foundations of authentic transformational leadership.",
"title": ""
},
{
"docid": "2493570aa0a224722a07e81c9aab55cd",
"text": "A Smart Tailor Platform is proposed as a venue to integrate various players in garment industry, such as tailors, designers, customers, and other relevant stakeholders to automate its business processes. In, Malaysia, currently the processes are conducted manually which consume too much time in fulfilling its supply and demand for the industry. To facilitate this process, a study was conducted to understand the main components of the business operation. The components will be represented using a strategic management tool namely the Business Model Canvas (BMC). The inception phase of the Rational Unified Process (RUP) was employed to construct the BMC. The phase began by determining the basic idea and structure of the business process. The information gathered was classified into nine related dimensions and documented in accordance with the BMC. The generated BMC depicts the relationship of all the nine dimensions for the garment industry, and thus represents an integrated business model of smart tailor. This smart platform allows the players in the industry to promote, manage and fulfill supply and demands of their product electronically. In addition, the BMC can be used to assist developers in designing and developing the smart tailor platform.",
"title": ""
},
{
"docid": "1dbed92fbed2293b1da3080c306709e2",
"text": "The large number of peer-to-peer file-sharing applications can be subdivided in three basic categories: having a mediated, pure or hybrid architecture. This paper details each of these and indicates their respective strengths and weaknesses. In addition to this theoretical study, a number of practical experiments were conducted, with special attention for three popular applications, representative of each of the three architectures. Although a number of measurement studies have been done in the past ([1], [3], etc.) these all investigate only a fraction of the available applications and architectures, with very little focus on the bigger picture and to the recent evolutions in peer-to-peer architectures.",
"title": ""
},
{
"docid": "e9059b28b268c4dfc091e63c7419f95b",
"text": "Internet advertising is one of the most popular online business models. JavaScript-based advertisements (ads) are often directly embedded in a web publisher's page to display ads relevant to users (e.g., by checking the user's browser environment and page content). However, as third-party code, the ads pose a significant threat to user privacy. Worse, malicious ads can exploit browser vulnerabilities to compromise users' machines and install malware. To protect users from these threats, we propose AdSentry, a comprehensive confinement solution for JavaScript-based advertisements. The crux of our approach is to use a shadow JavaScript engine to sandbox untrusted ads. In addition, AdSentry enables flexible regulation on ad script behaviors by completely mediating its access to the web page (including its DOM) without limiting the JavaScript functionality exposed to the ads. Our solution allows both web publishers and end users to specify access control policies to confine ads' behaviors. We have implemented a proof-of-concept prototype of AdSentry that transparently supports the Mozilla Firefox browser. Our experiments with a number of ads-related attacks successfully demonstrate its practicality and effectiveness. The performance measurement indicates that our system incurs a small performance overhead.",
"title": ""
},
{
"docid": "c4421784554095ffed1365b3ba41bdc0",
"text": "Mood classification of music is an emerging domain of music information retrieval. In the approach presented here features extracted from an audio file are used in combination with the affective value of song lyrics to map a song onto a psychologically based emotion space. The motivation behind this system is the lack of intuitive and contextually aware playlist generation tools available to music listeners. The need for such tools is made obvious by the fact that digital music libraries are constantly expanding, thus making it increasingly difficult to recall a particular song in the library or to create a playlist for a specific event. By combining audio content information with context-aware data, such as song lyrics, this system allows the listener to automatically generate a playlist to suit their current activity or mood. Thesis Supervisor: Barry Vercoe Title: Professor of Media Arts and Sciences, Program in Media Arts and Sciences",
"title": ""
},
{
"docid": "9a1505d126d1120ffa8d9670c71cb076",
"text": "A relevant knowledge [24] (and consequently research area) is the study of software lifecycle process models (PM-SDLCs). Such process models have been defined in three abstraction levels: (i) full organizational software lifecycles process models (e.g. ISO 12207, ISO 15504, CMMI/SW); (ii) lifecycles frameworks models (e.g. waterfall, spiral, RAD, and others) and (iii) detailed software development life cycles process (e.g. unified process, TSP, MBASE, and others). This paper focuses on (ii) and (iii) levels and reports the results of a descriptive/comparative study of 13 PM-SDLCs that permits a plausible explanation of their evolution in terms of common, distinctive, and unique elements as well as of the specification rigor and agility attributes. For it, a conceptual research approach and a software process lifecycle meta-model are used. Findings from the conceptual analysis are reported. Paper ends with the description of research limitations and recommendations for further research.",
"title": ""
},
{
"docid": "4bec71105c8dca3d0b48e99cdd4e809a",
"text": "Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.",
"title": ""
},
{
"docid": "73be9211451c3051d817112dc94df86c",
"text": "Shear wave elasticity imaging (SWEI) is a new approach to imaging and characterizing tissue structures based on the use of shear acoustic waves remotely induced by the radiation force of a focused ultrasonic beam. SWEI provides the physician with a virtual \"finger\" to probe the elasticity of the internal regions of the body. In SWEI, compared to other approaches in elasticity imaging, the induced strain in the tissue can be highly localized, because the remotely induced shear waves are attenuated fully within a very limited area of tissue in the vicinity of the focal point of a focused ultrasound beam. SWEI may add a new quality to conventional ultrasonic imaging or magnetic resonance imaging. Adding shear elasticity data (\"palpation information\") by superimposing color-coded elasticity data over ultrasonic or magnetic resonance images may enable better differentiation of tissues and further enhance diagnosis. This article presents a physical and mathematical basis of SWEI with some experimental results of pilot studies proving feasibility of this new ultrasonic technology. A theoretical model of shear oscillations in soft biological tissue remotely induced by the radiation force of focused ultrasound is described. Experimental studies based on optical and magnetic resonance imaging detection of these shear waves are presented. Recorded spatial and temporal profiles of propagating shear waves fully confirm the results of mathematical modeling. Finally, the safety of the SWEI method is discussed, and it is shown that typical ultrasonic exposure of SWEI is significantly below the threshold of damaging effects of focused ultrasound.",
"title": ""
},
{
"docid": "53ac28d19b9f3c8f68e12016e9cfabbc",
"text": "Despite surveillance systems becoming increasingly ubiquitous in our living environment, automated surveillance, currently based on video sensory modality and machine intelligence, lacks most of the time the robustness and reliability required in several real applications. To tackle this issue, audio sensory devices have been incorporated, both alone or in combination with video, giving birth in the past decade, to a considerable amount of research. In this article, audio-based automated surveillance methods are organized into a comprehensive survey: A general taxonomy, inspired by the more widespread video surveillance field, is proposed to systematically describe the methods covering background subtraction, event classification, object tracking, and situation analysis. For each of these tasks, all the significant works are reviewed, detailing their pros and cons and the context for which they have been proposed. Moreover, a specific section is devoted to audio features, discussing their expressiveness and their employment in the above-described tasks. Differing from other surveys on audio processing and analysis, the present one is specifically targeted to automated surveillance, highlighting the target applications of each described method and providing the reader with a systematic and schematic view useful for retrieving the most suited algorithms for each specific requirement.",
"title": ""
},
{
"docid": "4457c0b480ec9f3d503aa89c6bbf03b9",
"text": "An output-capacitorless low-dropout regulator (LDO) with a direct voltage-spike detection circuit is presented in this paper. The proposed voltage-spike detection is based on capacitive coupling. The detection circuit makes use of the rapid transient voltage at the LDO output to increase the bias current momentarily. Hence, the transient response of the LDO is significantly enhanced due to the improvement of the slew rate at the gate of the power transistor. The proposed voltage-spike detection circuit is applied to an output-capacitorless LDO implemented in a standard 0.35-¿m CMOS technology (where VTHN ¿ 0.5 V and VTHP ¿ -0.65 V). Experimental results show that the LDO consumes 19 ¿A only. It regulates the output at 0.8 V from a 1-V supply, with dropout voltage of 200 mV at the maximum output current of 66.7 mA. The voltage spike and the recovery time of the LDO with the proposed voltage-spike detection circuit are reduced to about 70 mV and 3 ¿s, respectively, whereas they are more than 420 mV and 30 ¿s for the LDO without the proposed detection circuit.",
"title": ""
},
{
"docid": "2f8a6dcaeea91ef5034908b5bab6d8d3",
"text": "Web-based social systems enable new community-based opportunities for participants to engage, share, and interact. This community value and related services like search and advertising are threatened by spammers, content polluters, and malware disseminators. In an effort to preserve community value and ensure longterm success, we propose and evaluate a honeypot-based approach for uncovering social spammers in online social systems. Two of the key components of the proposed approach are: (1) The deployment of social honeypots for harvesting deceptive spam profiles from social networking communities; and (2) Statistical analysis of the properties of these spam profiles for creating spam classifiers to actively filter out existing and new spammers. We describe the conceptual framework and design considerations of the proposed approach, and we present concrete observations from the deployment of social honeypots in MySpace and Twitter. We find that the deployed social honeypots identify social spammers with low false positive rates and that the harvested spam data contains signals that are strongly correlated with observable profile features (e.g., content, friend information, posting patterns, etc.). Based on these profile features, we develop machine learning based classifiers for identifying previously unknown spammers with high precision and a low rate of false positives.",
"title": ""
},
{
"docid": "e777794833a060f99e11675952cd3342",
"text": "In this paper we propose a novel method to utilize the skeletal structure not only for supporting force but for releasing heat by latent heat.",
"title": ""
},
{
"docid": "f1c1a0baa9f96d841d23e76b2b00a68d",
"text": "Introduction to Derivative-Free Optimization Andrew R. Conn, Katya Scheinberg, and Luis N. Vicente The absence of derivatives, often combined with the presence of noise or lack of smoothness, is a major challenge for optimization. This book explains how sampling and model techniques are used in derivative-free methods and how these methods are designed to efficiently and rigorously solve optimization problems. Although readily accessible to readers with a modest background in computational mathematics, it is also intended to be of interest to researchers in the field. 2009 · xii + 277 pages · Softcover · ISBN 978-0-898716-68-9 List Price $73.00 · RUNDBRIEF Price $51.10 · Code MP08",
"title": ""
},
{
"docid": "823680e9de11eb0fe987d2b6827f4665",
"text": "Narcissism has been a perennial topic for psychoanalytic papers since Freud's 'On narcissism: An introduction' (1914). The understanding of this field has recently been greatly furthered by the analytical writings of Kernberg and Kohut despite, or perhaps because of, their glaring disagreements. Despite such theoretical advances, clinical theory has far outpaced clinical practice. This paper provides a clarification of the characteristics, diagnosis and development of the narcissistic personality disorder and draws out the differing treatment implications, at various levels of psychological intensity, of the two theories discussed.",
"title": ""
},
{
"docid": "abb06d560266ca1695f72e4d908cf6ea",
"text": "A simple photovoltaic (PV) system capable of operating in grid-connected mode and using multilevel boost converter (MBC) and line commutated inverter (LCI) has been developed for extracting the maximum power and feeding it to a single phase utility grid with harmonic reduction. Theoretical analysis of the proposed system is done and the duty ratio of the MBC is estimated for extracting maximum power from PV array. For a fixed firing angle of LCI, the proposed system is able to track the maximum power with the determined duty ratio which remains the same for all irradiations. This is the major advantage of the proposed system which eliminates the use of a separate maximum power point tracking (MPPT) Experiments have been conducted for feeding a single phase voltage to the grid. So by proper and simplified technique we are reducing the harmonics in the grid for unbalanced loads.",
"title": ""
},
{
"docid": "a3cb3e28db4e44642ecdac8eb4ae9a8a",
"text": "A Ka-band highly linear power amplifier (PA) is implemented in 28-nm bulk CMOS technology. Using a deep class-AB PA topology with appropriate harmonic control circuit, highly linear and efficient PAs are designed at millimeter-wave band. This PA architecture provides a linear PA operation close to the saturated power. Also elaborated harmonic tuning and neutralization techniques are used to further improve the transistor gain and stability. A two-stack PA is designed for higher gain and output power than a common source (CS) PA. Additionally, average power tracking (APT) is applied to further reduce the power consumption at a low power operation and, hence, extend battery life. Both the PAs are tested with two different signals at 28.5 GHz; they are fully loaded long-term evolution (LTE) signal with 16-quadrature amplitude modulation (QAM), a 7.5-dB peakto-average power ratio (PAPR), and a 20-MHz bandwidth (BW), and a wireless LAN (WLAN) signal with 64-QAM, a 10.8-dB PAPR, and an 80-MHz BW. The CS/two-stack PAs achieve power-added efficiency (PAE) of 27%/25%, error vector magnitude (EVM) of 5.17%/3.19%, and adjacent channel leakage ratio (ACLRE-UTRA) of -33/-33 dBc, respectively, with an average output power of 11/14.6 dBm for the LTE signal. For the WLAN signal, the CS/2-stack PAs achieve the PAE of 16.5%/17.3%, and an EVM of 4.27%/4.21%, respectively, at an average output power of 6.8/11 dBm.",
"title": ""
}
] |
scidocsrr
|
14d14d26379bbfd49b3336bbb81eade6
|
Understanding compliance with internet use policy from the perspective of rational choice theory
|
[
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] |
[
{
"docid": "16fa16d1d27b5800c119a300f4c4a79c",
"text": "Deep learning recently shows strong competitiveness to improve polar code decoding. However, suffering from prohibitive training and computation complexity, the conventional deep neural network (DNN) is only possible for very short code length. In this paper, the main problems of deep learning in decoding are well solved. We first present the multiple scaled belief propagation (BP) algorithm, aiming at obtaining faster convergence and better performance. Based on this, deep neural network decoder (NND) with low complexity and latency, is proposed for any code length. The training only requires a small set of zero codewords. Besides, its computation complexity is close to the original BP. Experiment results show that the proposed (64,32) NND with 5 iterations achieves even lower bit error rate (BER) than the 30-iteration conventional BP and (512, 256) NND also outperforms conventional BP decoder with same iterations. The hardware architecture of basic computation block is given and folding technique is also considered, saving about 50% hardware cost.",
"title": ""
},
{
"docid": "5077d46e909db94b510da0621fcf3a9e",
"text": "This paper presents a high SNR self-capacitance sensing 3D hover sensor that does not use panel offset cancelation blocks. Not only reducing noise components, but increasing the signal components together, this paper achieved a high SNR performance while consuming very low power and die-area. Thanks to the proposed separated structure between driving and sensing circuits of the self-capacitance sensing scheme (SCSS), the signal components are increased without using high-voltage MOS sensing amplifiers which consume big die-area and power and badly degrade SNR. In addition, since a huge panel offset problem in SCSS is solved exploiting the panel's natural characteristics, other costly resources are not required. Furthermore, display noise and parasitic capacitance mismatch errors are compressed. We demonstrate a 39dB SNR at a 1cm hover point under 240Hz scan rate condition with noise experiments, while consuming 183uW/electrode and 0.73mm2/sensor, which are the power per electrode and the die-area per sensor, respectively.",
"title": ""
},
{
"docid": "0cfa7e43d557fa6d68349b637367341a",
"text": "This paper proposes a model of selective attention for visual search tasks, based on a framework for sequential decision-making. The model is implemented using a fixed pan-tilt-zoom camera in a visually cluttered lab environment, which samples the environment at discrete time steps. The agent has to decide where to fixate next based purely on visual information, in order to reach the region where a target object is most likely to be found. The model consists of two interacting modules. A reinforcement learning module learns a policy on a set of regions in the room for reaching the target object, using as objective function the expected value of the sum of discounted rewards. By selecting an appropriate gaze direction at each step, this module provides top-down control in the selection of the next fixation point. The second module performs “within fixation” processing, based exclusively on visual information. Its purpose is twofold: to provide the agent with a set of locations of interest in the current image, and to perform the detection and identification of the target object. Detailed experimental results show that the number of saccades to a target object significantly decreases with the number of training epochs. The results also show the learned policy to find the target object is invariant to small physical displacements as well as object inversion.",
"title": ""
},
{
"docid": "1c34abb0e212034a5fb96771499f1ee3",
"text": "Facial expression recognition is a useful feature in modern human computer interaction (HCI). In order to build efficient and reliable recognition systems, face detection, feature extraction and classification have to be robustly realised. Addressing the latter two issues, this work proposes a new method based on geometric and transient optical flow features and illustrates their comparison and integration for facial expression recognition. In the authors’ method, photogrammetric techniques are used to extract three-dimensional (3-D) features from every image frame, which is regarded as a geometric feature vector. Additionally, optical flow-based motion detection is carried out between consecutive images, what leads to the transient features. Artificial neural network and support vector machine classification results demonstrate the high performance of the proposed method. In particular, through the use of 3-D normalisation and colour information, the proposed method achieves an advanced feature representation for the accurate and robust classification of facial expressions.",
"title": ""
},
{
"docid": "c5958b1ef21663b89e3823e9c33dc316",
"text": "The so-called “phishing” attacks are one of the important threats to individuals and corporations in today’s Internet. Combatting phishing is thus a top-priority, and has been the focus of much work, both on the academic and on the industry sides. In this paper, we look at this problem from a new angle. We have monitored a total of 19,066 phishing attacks over a period of ten months and found that over 90% of these attacks were actually replicas or variations of other attacks in the database. This provides several opportunities and insights for the fight against phishing: first, quickly and efficiently detecting replicas is a very effective prevention tool. We detail one such tool in this paper. Second, the widely held belief that phishing attacks are dealt with promptly is but an illusion. We have recorded numerous attacks that stay active throughout our observation period. This shows that the current prevention techniques are ineffective and need to be overhauled. We provide some suggestions in this direction. Third, our observation give a new perspective into the modus operandi of attackers. In particular, some of our observations suggest that a small group of attackers could be behind a large part of the current attacks. Taking down that group could potentially have a large impact on the phishing attacks observed today.",
"title": ""
},
{
"docid": "20e6ffb912ee0291d53e7a2750d1b426",
"text": "This paper considers a number of selection schemes commonly used in modern genetic algorithms. Specifically, proportionate reproduction, ranking selection, tournament selection, and Genitor (or «steady state\") selection are compared on the basis of solutions to deterministic difference or differential equations, which are verified through computer simulations. The analysis provides convenient approximate or exact solutions as well as useful convergence time and growth ratio estimates. The paper recommends practical application of the analyses and suggests a number of paths for more detailed analytical investigation of selection techniques.",
"title": ""
},
{
"docid": "09985252933e82cf1615dabcf1e6d9a2",
"text": "Facial landmark detection plays a very important role in many facial analysis applications such as identity recognition, facial expression analysis, facial animation, 3D face reconstruction as well as facial beautification. With the recent advance of deep learning, the performance of facial landmark detection, including on unconstrained inthe-wild dataset, has seen considerable improvement. This paper presents a survey of deep facial landmark detection for 2D images and video. A comparative analysis of different face alignment approaches is provided as well as some future research directions.",
"title": ""
},
{
"docid": "40a7682aabed0e1e5a27118cc70939c9",
"text": "Design and results of a 77 GHz FM/CW radar sensor based on a simple waveguide circuitry and a novel type of printed, low-profile, and low-loss antenna are presented. A Gunn VCO and a finline mixer act as transmitter and receiver, connected by two E-plane couplers. The folded reflector type antenna consists of a printed slot array and another planar substrate which, at the same time, provide twisting of the polarization and focussing of the incident wave. In this way, a folded low-profile, low-loss antenna can be realized. The performance of the radar is described, together with first results on a scanning of the antenna beam.",
"title": ""
},
{
"docid": "b6473fa890eba0fd00ac7e1999ae6fef",
"text": "Memristors have extended their influence beyond memory to logic and in-memory computing. Memristive logic design, the methodology of designing logic circuits using memristors, is an emerging concept whose growth is fueled by the quest for energy efficient computing systems. As a result, many memristive logic families have evolved with different attributes, and a mature comparison among them is needed to judge their merit. This paper presents a framework for comparing logic families by classifying them on the basis of fundamental properties such as statefulness, proximity (from the memory array), and flexibility of computation. We propose metrics to compare memristive logic families using analytic expressions for performance (latency), energy efficiency, and area. Then, we provide guidelines for a holistic comparison of logic families and set the stage for the evolution of new logic families.",
"title": ""
},
{
"docid": "3113012fdaa154e457e81515b755c041",
"text": "Knowledge bases (KB), both automatically and manually constructed, are often incomplete — many valid facts can be inferred from the KB by synthesizing existing information. A popular approach to KB completion is to infer new relations by combinatory reasoning over the information found along other paths connecting a pair of entities. Given the enormous size of KBs and the exponential number of paths, previous path-based models have considered only the problem of predicting a missing relation given two entities, or evaluating the truth of a proposed triple. Additionally, these methods have traditionally used random paths between fixed entity pairs or more recently learned to pick paths between them. We propose a new algorithm, MINERVA1, which addresses the much more difficult and practical task of answering questions where the relation is known, but only one entity. Since random walks are impractical in a setting with combinatorially many destinations from a start node, we present a neural reinforcement learning approach which learns how to navigate the graph conditioned on the input query to find predictive paths. Empirically, this approach obtains state-of-the-art results on several datasets, significantly outperforming prior methods.",
"title": ""
},
{
"docid": "e48dae70582d949a60a5f6b5b05117a7",
"text": "Background: Multiple-Valued Logic (MVL) is the non-binary-valued system, in which more than two levels of information content are available, i.e., L>2. In modern technologies, the dual level binary logic circuits have normally been used. However, these suffer from several significant issues such as the interconnection considerations including the parasitics, area and power dissipation. The MVL circuits have been proved to be consisting of reduced circuitry and increased efficiency in terms of higher utilization of the circuit resources through multiple levels of voltage. Innumerable algorithms have been developed for designing such MVL circuits. Extended form is one of the algebraic techniques used in designing these MVL circuits. Voltage mode design has also been employed for constructing various types of MVL circuits. Novelty: This paper proposes a novel MVLTRANS inverter, designed using conventional CMOS and pass transistor logic based MVLPTL inverter. Binary to MVL Converter/Encoder and MVL to binary Decoder/Converter are also presented in the paper. In addition to the proposed decoder circuit, a 4-bit novel MVL Binary decoder circuit is also proposed. Tools Used: All these circuits are designed, implemented and verified using Cadence® Virtuoso tools using 180 nm technology library.",
"title": ""
},
{
"docid": "2df35b05a40a646ba6f826503955601a",
"text": "This paper describes a new prototype system for detecting the demeanor of patients in emergency situations using the Intel RealSense camera system [1]. It describes how machine learning, a support vector machine (SVM) and the RealSense facial detection system can be used to track patient demeanour for pain monitoring. In a lab setting, the application has been trained to detect four different intensities of pain and provide demeanour information about the patient's eyes, mouth, and agitation state. Its utility as a basis for evaluating the condition of patients in situations using video, machine learning and 5G technology is discussed.",
"title": ""
},
{
"docid": "ec3eac65e52b62af3ab1599e643c49ac",
"text": "Topic models for text corpora comprise a popular family of methods that have inspired many extensions to encode properties such as sparsity, interactions with covariates, and the gradual evolution of topics. In this paper, we combine certain motivating ideas behind variations on topic models with modern techniques for variational inference to produce a flexible framework for topic modeling that allows for rapid exploration of different models. We first discuss how our framework relates to existing models, and then demonstrate that it achieves strong performance, with the introduction of sparsity controlling the trade off between perplexity and topic coherence. We have released our code and preprocessing scripts to support easy future comparisons and exploration.",
"title": ""
},
{
"docid": "a863f6be5416181ce8ec6a5cecadb091",
"text": "Anthropometry is a simple reliable method for quantifying body size and proportions by measuring body length, width, circumference (C), and skinfold thickness (SF). More than 19 sites for SF, 17 for C, 11 for width, and 9 for length have been included in equations to predict body fat percent with a standard error of estimate (SEE) range of +/- 3% to +/- 11% of the mean of the criterion measurement. Recent studies indicate that not only total body fat, but also regional fat and skeletal muscle, can be predicted from anthropometrics. Our Rosetta database supports the thesis that sex, age, ethnicity, and site influence anthropometric predictions; the prediction reliabilities are consistently higher for Whites than for other ethnic groups, and also by axial than by peripheral sites (biceps and calf). The reliability of anthropometrics depends on standardizing the caliper and site of measurement, and upon the measuring skill of the anthropometrist. A reproducibility of +/- 2% for C and +/- 10% for SF measurements usually is required to certify the anthropometrist.",
"title": ""
},
{
"docid": "c4d5464727db6deafc2ce2307284dd0c",
"text": "— Recently, many researchers have focused on building dual handed static gesture recognition systems. Single handed static gestures, however, pose more recognition complexity due to the high degree of shape ambiguities. This paper presents a gesture recognition setup capable of recognizing and emphasizing the most ambiguous static single handed gestures. Performance of the proposed scheme is tested on the alphabets of American Sign Language (ASL). Segmentation of hand contours from image background is carried out using two different strategies; skin color as detection cue with RGB and YCbCr color spaces, and thresholding of gray level intensities. A novel, rotation and size invariant, contour tracing descriptor is used to describe gesture contours generated by each segmentation technique. Performances of k-Nearest Neighbor (k-NN) and multiclass Support Vector Machine (SVM) classification techniques are evaluated to classify a particular gesture. Gray level segmented contour traces classified by multiclass SVM achieve accuracy up to 80.8% on the most ambiguous gestures of ASL alphabets with overall accuracy of 90.1%.",
"title": ""
},
{
"docid": "a542b157e4be534997ce720329d2fea5",
"text": "Microcavites contribute to enhancing the optical efficiency and color saturation of an organic light emitting diode (OLED) display. A major tradeoff of the strong cavity effect is its apparent color shift, especially for RGB-based OLED displays, due to their mismatched angular intensity distributions. To mitigate the color shift, in this work we first analyze the emission spectrum shifts and angular distributions for the OLEDs with strong and weak cavities, both theoretically and experimentally. Excellent agreement between simulation and measurement is obtained. Next, we propose a systematic approach for RGB-OLED displays based on multi-objective optimization algorithms. Three objectives, namely external quantum efficiency (EQE), color gamut coverage, and angular color shift of primary and mixed colors, can be optimized simultaneously. Our optimization algorithm is proven to be effective for suppressing color shift while keeping a relatively high optical efficiency and wide color gamut. © 2017 Optical Society of America under the terms of the OSA Open Access Publishing Agreement OCIS codes: (140.3948) Microcavity devices; (160.4890) Organic materials; (230.3670) Light-emitting diodes;",
"title": ""
},
{
"docid": "81b52831b3b9e9b6384ee4dda84dd741",
"text": "The field of Web development is entering the HTML5 and CSS3 era and JavaScript is becoming increasingly influential. A large number of JavaScript frameworks have been recently promoted. Practitioners applying the latest technologies need to choose a suitable JavaScript framework (JSF) in order to abstract the frustrating and complicated coding steps and to provide a cross-browser compatibility. Apart from benchmark suites and recommendation from experts, there is little research helping practitioners to select the most suitable JSF to a given situation. The few proposals employ software metrics on the JSF, but practitioners are driven by different concerns when choosing a JSF. As an answer to the critical needs, this paper is a call for action. It proposes a research design towards a comparative analysis framework of JSF, which merges researcher needs and practitioner needs.",
"title": ""
},
{
"docid": "1afd50a91b67bd1eab0db1c2a19a6c73",
"text": "In this paper we present syntactic characterization of temporal formulas that express various properties of interest in the verification of concurrent programs. Such a characterization helps us in choosing the right techniques for proving correctness with respect to these properties. The properties that we consider include safety properties, liveness properties and fairness properties. We also present algorithms for checking if a given temporal formula expresses any of these properties.",
"title": ""
},
{
"docid": "ead5432cb390756a99e4602a9b6266bf",
"text": "In this paper, we present a new approach for text localization in natural images, by discriminating text and non-text regions at three levels: pixel, component and text line levels. Firstly, a powerful low-level filter called the Stroke Feature Transform (SFT) is proposed, which extends the widely-used Stroke Width Transform (SWT) by incorporating color cues of text pixels, leading to significantly enhanced performance on inter-component separation and intra-component connection. Secondly, based on the output of SFT, we apply two classifiers, a text component classifier and a text-line classifier, sequentially to extract text regions, eliminating the heuristic procedures that are commonly used in previous approaches. The two classifiers are built upon two novel Text Covariance Descriptors (TCDs) that encode both the heuristic properties and the statistical characteristics of text stokes. Finally, text regions are located by simply thresholding the text-line confident map. Our method was evaluated on two benchmark datasets: ICDAR 2005 and ICDAR 2011, and the corresponding F-measure values are 0.72 and 0.73, respectively, surpassing previous methods in accuracy by a large margin.",
"title": ""
}
] |
scidocsrr
|
14f8b7ea36774ef990759bc644743083
|
Neural Variational Inference for Text Processing
|
[
{
"docid": "55b9284f9997b18d3b1fad9952cd4caa",
"text": "This paper presents a system which learns to answer questions on a broad range of topics from a knowledge base using few handcrafted features. Our model learns low-dimensional embeddings of words and knowledge base constituents; these representations are used to score natural language questions against candidate answers. Training our system using pairs of questions and structured representations of their answers, and pairs of question paraphrases, yields competitive results on a recent benchmark of the literature.",
"title": ""
},
{
"docid": "8b6832586f5ec4706e7ace59101ea487",
"text": "We develop a semantic parsing framework based on semantic similarity for open domain question answering (QA). We focus on single-relation questions and decompose each question into an entity mention and a relation pattern. Using convolutional neural network models, we measure the similarity of entity mentions with entities in the knowledge base (KB) and the similarity of relation patterns and relations in the KB. We score relational triples in the KB using these measures and select the top scoring relational triple to answer the question. When evaluated on an open-domain QA task, our method achieves higher precision across different recall points compared to the previous approach, and can improve F1 by 7 points.",
"title": ""
},
{
"docid": "120e36cc162f4ce602da810c80c18c7d",
"text": "We describe a new model for learning meaningful representations of text documents from an unlabeled collection of documents. This model is inspired by the recently proposed Replicated Softmax, an undirected graphical model of word counts that was shown to learn a better generative model and more meaningful document representations. Specifically, we take inspiration from the conditional mean-field recursive equations of the Replicated Softmax in order to define a neural network architecture that estimates the probability of observing a new word in a given document given the previously observed words. This paradigm also allows us to replace the expensive softmax distribution over words with a hierarchical distribution over paths in a binary tree of words. The end result is a model whose training complexity scales logarithmically with the vocabulary size instead of linearly as in the Replicated Softmax. Our experiments show that our model is competitive both as a generative model of documents and as a document representation learning algorithm.",
"title": ""
}
] |
[
{
"docid": "c73af0945ac35847c7a86a7f212b4d90",
"text": "We report a case of planned complex suicide (PCS) by a young man who had previously tried to commit suicide twice. He was found dead hanging by his neck, with a shot in his head. The investigation of the scene, the method employed, and previous attempts at suicide altogether pointed toward a suicidal etiology. The main difference between PCS and those cases defined in the medicolegal literature as combined suicides lies in the complex mechanism used by the victim as a protection against a failure in one of the mechanisms.",
"title": ""
},
{
"docid": "d35d96730b71db044fccf4c8467ff081",
"text": "Image steganalysis is to discriminate innocent images and those suspected images with hidden messages. This task is very challenging for modern adaptive steganography, since modifications due to message hiding are extremely small. Recent studies show that Convolutional Neural Networks (CNN) have demonstrated superior performances than traditional steganalytic methods. Following this idea, we propose a novel CNN model for image steganalysis based on residual learning. The proposed Deep Residual learning based Network (DRN) shows two attractive properties than existing CNN based methods. First, the model usually contains a large number of network layers, which proves to be effective to capture the complex statistics of digital images. Second, the residual learning in DRN preserves the stego signal coming from secret messages, which is extremely beneficial for the discrimination of cover images and stego images. Comprehensive experiments on standard dataset show that the DRN model can detect the state of arts steganographic algorithms at a high accuracy. It also outperforms the classical rich model method and several recently proposed CNN based methods.",
"title": ""
},
{
"docid": "738a69ad1006c94a257a25c1210f6542",
"text": "Encrypted data search allows cloud to offer fundamental information retrieval service to its users in a privacy-preserving way. In most existing schemes, search result is returned by a semi-trusted server and usually considered authentic. However, in practice, the server may malfunction or even be malicious itself. Therefore, users need a result verification mechanism to detect the potential misbehavior in this computation outsourcing model and rebuild their confidence in the whole search process. On the other hand, cloud typically hosts large outsourced data of users in its storage. The verification cost should be efficient enough for practical use, i.e., it only depends on the corresponding search operation, regardless of the file collection size. In this paper, we are among the first to investigate the efficient search result verification problem and propose an encrypted data search scheme that enables users to conduct secure conjunctive keyword search, update the outsourced file collection and verify the authenticity of the search result efficiently. The proposed verification mechanism is efficient and flexible, which can be either delegated to a public trusted authority (TA) or be executed privately by data users. We formally prove the universally composable (UC) security of our scheme. Experimental result shows its practical efficiency even with a large dataset.",
"title": ""
},
{
"docid": "5f1c1baca54a8af4bf9c96818ad5b688",
"text": "The spatial organisation of museums and its influence on the visitor experience has been the subject of numerous studies. Previous research, despite reporting some actual behavioural correlates, rarely had the possibility to investigate the cognitive processes of the art viewers. In the museum context, where spatial layout is one of the most powerful curatorial tools available, attention and memory can be measured as a means of establishing whether or not the gallery fulfils its function as a space for contemplating art. In this exploratory experiment, 32 participants split into two groups explored an experimental, non-public exhibition and completed two unanticipated memory tests afterwards. The results show that some spatial characteristics of an exhibition can inhibit the recall of pictures and shift the focus to perceptual salience of the artworks.",
"title": ""
},
{
"docid": "62da9a85945652f195086be0ef780827",
"text": "Fingerprint biometric is one of the most successful biometrics applied in both forensic law enforcement and security applications. Recent developments in fingerprint acquisition technology have resulted in touchless live scan devices that generate 3D representation of fingerprints, and thus can overcome the deformation and smearing problems caused by conventional contact-based acquisition techniques. However, there are yet no 3D full fingerprint databases with their corresponding 2D prints for fingerprint biometric research. This paper presents a 3D fingerprint database we have established in order to investigate the 3D fingerprint biometric comprehensively. It consists of 3D fingerprints as well as their corresponding 2D fingerprints captured by two commercial fingerprint scanners from 150 subjects in Australia. Besides, we have tested the performance of 2D fingerprint verification, 3D fingerprint verification, and 2D to 3D fingerprint verification. The results show that more work is needed to improve the performance of 2D to 3D fingerprint verification. In addition, the database is expected to be released publicly in late 2014.",
"title": ""
},
{
"docid": "49e616b9db5ba5003ae01abfb6ed3e16",
"text": "BACKGROUND\nAlthough substantial evidence suggests that stressful life events predispose to the onset of episodes of depression and anxiety, the essential features of these events that are depressogenic and anxiogenic remain uncertain.\n\n\nMETHODS\nHigh contextual threat stressful life events, assessed in 98 592 person-months from 7322 male and female adult twins ascertained from a population-based registry, were blindly rated on the dimensions of humiliation, entrapment, loss, and danger and their categories. Onsets of pure major depression (MD), pure generalized anxiety syndrome (GAS) (defined as generalized anxiety disorder with a 2-week minimum duration), and mixed MD-GAS episodes were examined using logistic regression.\n\n\nRESULTS\nOnsets of pure MD and mixed MD-GAS were predicted by higher ratings of loss and humiliation. Onsets of pure GAS were predicted by higher ratings of loss and danger. High ratings of entrapment predicted only onsets of mixed episodes. The loss categories of death and respondent-initiated separation predicted pure MD but not pure GAS episodes. Events with a combination of humiliation (especially other-initiated separation) and loss were more depressogenic than pure loss events, including death. No sex differences were seen in the prediction of episodes of illness by event categories.\n\n\nCONCLUSIONS\nIn addition to loss, humiliating events that directly devalue an individual in a core role were strongly linked to risk for depressive episodes. Event dimensions and categories that predispose to pure MD vs pure GAS episodes can be distinguished with moderate specificity. The event dimensions that preceded mixed MD-GAS episodes were largely the sum of those that preceded pure MD and pure GAS episodes.",
"title": ""
},
{
"docid": "865306ad6f5288cf62a4082769e8068a",
"text": "The rapid growth of the Internet has brought with it an exponential increase in the type and frequency of cyber attacks. Many well-known cybersecurity solutions are in place to counteract these attacks. However, the generation of Big Data over computer networks is rapidly rendering these traditional solutions obsolete. To cater for this problem, corporate research is now focusing on Security Analytics, i.e., the application of Big Data Analytics techniques to cybersecurity. Analytics can assist network managers particularly in the monitoring and surveillance of real-time network streams and real-time detection of both malicious and suspicious (outlying) patterns. Such a behavior is envisioned to encompass and enhance all traditional security techniques. This paper presents a comprehensive survey on the state of the art of Security Analytics, i.e., its description, technology, trends, and tools. It hence aims to convince the reader of the imminent application of analytics as an unparalleled cybersecurity solution in the near future.",
"title": ""
},
{
"docid": "485f7998056ef7a30551861fad33bef4",
"text": "Research has shown close connections between personality and subjective well-being (SWB), suggesting that personality traits predispose individuals to experience different levels of SWB. Moreover, numerous studies have shown that self-efficacy is related to both personality factors and SWB. Extending previous research, we show that general self-efficacy functionally connects personality factors and two components of SWB (life satisfaction and subjective happiness). Our results demonstrate the mediating role of self-efficacy in linking personality factors and SWB. Consistent with our expectations, the influence of neuroticism, extraversion, openness, and conscientiousness on life satisfaction was mediated by self-efficacy. Furthermore, self-efficacy mediated the influence of openness and conscientiousness, but not that of neuroticism and extraversion, on subjective happiness. Results highlight the importance of cognitive beliefs in functionally linking personality traits and SWB.",
"title": ""
},
{
"docid": "e104544e8ac61ea6d77415df1deeaf81",
"text": "This thesis is devoted to marker-less 3D human motion tracking in calibrated and synchronized multicamera systems. Pose estimation is based on a 3D model, which is transformed into the image plane and then rendered. Owing to elaborated techniques the tracking of the full body has been achieved in real-time via dynamic optimization or dynamic Bayesian filtering. The objective function of a particle swarm optimization algorithm and the observation model of a particle filter are based on matching between the rendered 3D models in the required poses and image features representing the extracted person. In such an approach the main part of the computational overload is associated with the rendering of 3D models in hypothetical poses as well as determination of value of objective function. Effective methods for rendering of 3D models in real-time with support of OpenGL as well as parallel methods for determining the objective function on the GPU were developed. The elaborated solutions permit 3D tracking of full body motion in real-time.",
"title": ""
},
{
"docid": "ea8b083238554866d36ac41b9c52d517",
"text": "A fully automatic document retrieval system operating on the IBM 7094 is described. The system is characterized by the fact that several hundred different methods are available to analyze documents and search requests. This feature is used in the retrieval process by leaving the exact sequence of operations initially unspecified, and adapting the search strategy to the needs of individual users. The system is used not only to simulate an actual operating environment, but also to test the effectiveness of the various available processing methods. Results obtained so far seem to indicate that some combination of analysis procedures can in general be relied upon to retrieve the wanted information. A typical search request is used as an example in the present report to illustrate systems operations and evaluation procedures .",
"title": ""
},
{
"docid": "d44080fc547355ff8389f9da53d03c45",
"text": "High profile attacks such as Stuxnet and the cyber attack on the Ukrainian power grid have increased research in Industrial Control System (ICS) and Supervisory Control and Data Acquisition (SCADA) network security. However, due to the sensitive nature of these networks, there is little publicly available data for researchers to evaluate the effectiveness of the proposed solution. The lack of representative data sets makes evaluation and independent validation of emerging security solutions difficult and slows down progress towards effective and reusable solutions. This paper presents our work to generate representative labeled data sets for SCADA networks that security researcher can use freely. The data sets include packet captures including both malicious and non-malicious Modbus traffic and accompanying CSV files that contain labels to provide the ground truth for supervised machine learning. To provide representative data at the network level, the data sets were generated in a SCADA sandbox, where electrical network simulators were used to introduce realism in the physical component. Also, real attack tools, some of them custom built for Modbus networks, were used to generate the malicious traffic. Even though they do not fully replicate a production network, these data sets represent a good baseline to validate detection tools for SCADA systems.",
"title": ""
},
{
"docid": "c3ca913fa81b2e79a2fff6d7a5e2fea7",
"text": "We present Query-Regression Network (QRN), a variant of Recurrent Neural Network (RNN) that is suitable for end-to-end machine comprehension. While previous work [18, 22] largely relied on external memory and global softmax attention mechanism, QRN is a single recurrent unit with internal memory and local sigmoid attention. Unlike most RNN-based models, QRN is able to effectively handle long-term dependencies and is highly parallelizable. In our experiments we show that QRN obtains the state-of-the-art result in end-to-end bAbI QA tasks [21].",
"title": ""
},
{
"docid": "dbfbdd4866d7fd5e34620c82b8124c3a",
"text": "Searle (1989) posits a set of adequacy criteria for any account of the meaning and use of performative verbs, such as order or promise. Central among them are: (a) performative utterances are performances of the act named by the performative verb; (b) performative utterances are self-verifying; (c) performative utterances achieve (a) and (b) in virtue of their literal meaning. He then argues that the fundamental problem with assertoric accounts of performatives is that they fail (b), and hence (a), because being committed to having an intention does not guarantee having that intention. Relying on a uniform meaning for verbs on their reportative and performative uses, we propose an assertoric analysis of performative utterances that does not require an actual intention for deriving (b), and hence can meet (a) and (c). Explicit performative utterances are those whose illocutionary force is made explicit by the verbs appearing in them (Austin 1962): (1) I (hereby) promise you to be there at five. (is a promise) (2) I (hereby) order you to be there at five. (is an order) (3) You are (hereby) ordered to report to jury duty. (is an order) (1)–(3) look and behave syntactically like declarative sentences in every way. Hence there is no grammatical basis for the once popular claim that I promise/ order spells out a ‘performative prefix’ that is silent in all other declaratives. Such an analysis, in any case, leaves unanswered the question of how illocutionary force is related to compositional meaning and, consequently, does not explain how the first person and present tense are special, so that first-person present tense forms can spell out performative prefixes, while others cannot. Minimal variations in person or tense remove the ‘performative effect’: (4) I promised you to be there at five. (is not a promise) (5) He promises to be there at five. (is not a promise) An attractive idea is that utterances of sentences like those in (1)–(3) are asser∗ The names of the authors appear in alphabetical order. 150 Condoravdi & Lauer tions, just like utterances of other declaratives, whose truth is somehow guaranteed. In one form or another, this basic strategy has been pursued by a large number of authors ever since Austin (1962) (Lemmon 1962; Hedenius 1963; Bach & Harnish 1979; Ginet 1979; Bierwisch 1980; Leech 1983; among others). One type of account attributes self-verification to meaning proper. Another type, most prominently exemplified by Bach & Harnish (1979), tries to derive the performative effect by means of an implicature-like inference that the hearer may draw based on the utterance of the explicit performative. Searle’s (1989) Challenge Searle (1989) mounts an argument against analyses of explicit performative utterances as self-verifying assertions. He takes the argument to show that an assertoric account is impossible. Instead, we take it to pose a challenge that can be met, provided one supplies the right semantics for the verbs involved. Searle’s argument is based on the following desiderata he posits for any theory of explicit performatives: (a) performative utterances are performances of the act named by the performative verb; (b) performative utterances are self-guaranteeing; (c) performative utterances achieve (a) and (b) in virtue of their literal meaning, which, in turn, ought to be based on a uniform lexical meaning of the verb across performative and reportative uses. 
According to Searle’s speech act theory, making a promise requires that the promiser intend to do so, and similarly for other performative verbs (the sincerity condition). It follows that no assertoric account can meet (a-c): An assertion cannot ensure that the speaker has the necessary intention. “Such an assertion does indeed commit the speaker to the existence of the intention, but the commitment to having the intention doesn’t guarantee the actual presence of the intention.” Searle (1989: 546) Hence assertoric accounts must fail on (b), and, a forteriori, on (a) and (c).1 Although Searle’s argument is valid, his premise that for truth to be guaranteed the speaker must have a particular intention is questionable. In the following, we give an assertoric account that delivers on (a-c). We aim for an 1 It should be immediately clear that inference-based accounts cannot meet (a-c) above. If the occurrence of the performative effect depends on the hearer drawing an inference, then such sentences could not be self-verifying, for the hearer may well fail to draw the inference. Performative Verbs and Performative Acts 151 account on which the assertion of the explicit performative is the performance of the act named by the performative verb. No hearer inferences are necessary. 1 Reportative and Performative Uses What is the meaning of the word order, then, so that it can have both reportative uses – as in (6) – and performative uses – as in (7)? (6) A ordered B to sign the report. (7) [A to B] I order you to sign the report now. The general strategy in this paper will be to ask what the truth conditions of reportative uses of performative verbs are, and then see what happens if these verbs are put in the first person singular present tense. The reason to start with the reportative uses is that speakers have intuitions about their truth conditions. This is not true for performative uses, because these are always true when uttered, obscuring the truth-conditional content of the declarative sentence.2 An assertion of (6) takes for granted that A presumed to have authority over B and implies that there was a communicative act from A to B. But what kind of communicative act? (7) or, in the right context, (8a-c) would suffice. (8) a. Sign the report now! b. You must sign the report now! c. I want you to sign the report now! What do these sentences have in common? We claim it is this: In the right context they commit A to a particular kind of preference for B signing the report immediately. If B accepts the utterance, he takes on a commitment to act as though he, too, prefers signing the report. If the report is co-present with A and B, he will sign it, if the report is in his office, he will leave to go there immediately, and so on. To comply with an order to p is to act as though one prefers p. One need not actually prefer it, but one has to act as if one did. The authority mentioned above amounts to this acceptance being socially or institutionally mandated. Of course, B has the option to refuse to take on this commitment, in either of two ways: (i) he can deny A’s authority, (ii) while accepting the authority, he can refuse to abide by it, thereby violating the institutional or social mandate. Crucially, in either case, (6) will still be true, as witnessed by the felicity of: 2 Szabolcsi (1982), in one of the earliest proposals for a compositional semantics of performative utterances, already pointed out the importance of reportative uses. 152 Condoravdi & Lauer (9) a. (6), but B refused to do it. 
b. (6), but B questioned his authority. Not even uptake by the addressee is necessary for order to be appropriate, as seen in (10) and the naturally occurring (11):3 (10) (6), but B did not hear him. (11) He ordered Kornilov to desist but either the message failed to reach the general or he ignored it.4 What is necessary is that the speaker expected uptake to happen, arguably a minimal requirement for an act to count as a communicative event. To sum up, all that is needed for (6) to be true and appropriate is that (i) there is a communicative act from A to B which commits A to a preference for B signing the report immediately and (ii) A presumes to have authority over B. The performative effect arises precisely when the utterance itself is a witness for the existential claim in (i). There are two main ingredients in the meaning of order informally outlined above: the notion of a preference, in particular a special kind of preference that guides action, and the notion of a commitment. The next two sections lay some conceptual groundwork before we spell out our analysis in section 4. 2 Representing Preferences To represent preferences that guide action, we need a way to represent preferences of different strength. Kratzer’s (1981) theory of modality is not suitable for this purpose. Suppose, for instance, that Sven desires to finish his paper and that he also wants to lie around all day, doing nothing. Modeling his preferences in the style of Kratzer, the propositions expressed by (12) and (13) would have to be part of Sven’s bouletic ordering source assigned to the actual world: (12) Sven finishes his paper. (13) Sven lies around all day, doing nothing. But then, Sven should be equally happy if he does nothing as he is if he finishes his paper. We want to be able to explain why, given his knowledge that (12) and (13) are incompatible, he works on his paper. Intuitively, it is because the preference expressed by (12) is more important than that expressed by (13). 3 We owe this observation to Lauri Karttunen. 4 https://tspace.library.utoronto.ca/citd/RussianHeritage/12.NR/NR.12.html Performative Verbs and Performative Acts 153 Preference Structures Definition 1. A preference structure relative to an information state W is a pair 〈P,≤〉, where P⊆℘(W ) and ≤ is a (weak) partial order on P. We can now define a notion of consistency that is weaker than requiring that all propositions in the preference structure be compatible: Definition 2. A preference structure 〈P,≤〉 is consistent iff for any p,q ∈ P such that p∩q = / 0, either p < q or q < p. Since preference structures are defined relative to an information state W , consistency will require not only logically but also contextually incompatible propositions to be strictly ranked. For example, if W is Sven’s doxastic state, and he knows that (12) and (13) are incompatible, for a bouletic preference structure of his to be consistent it must strictly rank the two propositions. In general, bouletic preference ",
"title": ""
},
{
"docid": "5e60c55f419c7d62f4eeb9165e7f107c",
"text": "Background : Agile software development has become a popular way of developing software. Scrum is the most frequently used agile framework, but it is often reported to be adapted in practice. Objective: Thus, we aim to understand how Scrum is adapted in different contexts and what are the reasons for these changes. Method : Using a structured interview guideline, we interviewed ten German companies about their concrete usage of Scrum and analysed the results qualitatively. Results: All companies vary Scrum in some way. The least variations are in the Sprint length, events, team size and requirements engineering. Many users varied the roles, effort estimations and quality assurance. Conclusions: Many variations constitute a substantial deviation from Scrum as initially proposed. For some of these variations, there are good reasons. Sometimes, however, the variations are a result of a previous non-agile, hierarchical organisation.",
"title": ""
},
{
"docid": "72e9e772ede3d757122997d525d0f79c",
"text": "Deep learning systems, such as Convolutional Neural Networks (CNNs), can infer a hierarchical representation of input data that facilitates categorization. In this paper, we propose to learn affect-salient features for Speech Emotion Recognition (SER) using semi-CNN. The training of semi-CNN has two stages. In the first stage, unlabeled samples are used to learn candidate features by contractive convolutional neural network with reconstruction penalization. The candidate features, in the second step, are used as the input to semi-CNN to learn affect-salient, discriminative features using a novel objective function that encourages the feature saliency, orthogonality and discrimination. Our experiment results on benchmark datasets show that our approach leads to stable and robust recognition performance in complex scenes (e.g., with speaker and environment distortion), and outperforms several well-established SER features.",
"title": ""
},
{
"docid": "021bf98bc0ff722d9a44c5ef5e73f3c8",
"text": "BACKGROUND\nMalignant bowel obstruction is a highly symptomatic, often recurrent, and sometimes refractory condition in patients with intra-abdominal tumor burden. Gastro-intestinal symptoms and function may improve with anti-inflammatory, anti-secretory, and prokinetic/anti-nausea combination medical therapy.\n\n\nOBJECTIVE\nTo describe the effect of octreotide, metoclopramide, and dexamethasone in combination on symptom burden and bowel function in patients with malignant bowel obstruction and dysfunction.\n\n\nDESIGN\nA retrospective case series of patients with malignant bowel obstruction (MBO) and malignant bowel dysfunction (MBD) treated by a palliative care consultation service with octreotide, metoclopramide, and dexamethasone. Outcomes measures were nausea, pain, and time to resumption of oral intake.\n\n\nRESULTS\n12 cases with MBO, 11 had moderate/severe nausea on presentation. 100% of these had improvement in nausea by treatment day #1. 100% of patients with moderate/severe pain improved to tolerable level by treatment day #1. The median time to resumption of oral intake was 2 days (range 1-6 days) in the 8 cases with evaluable data. Of 7 cases with MBD, 6 had For patients with malignant bowel dysfunction, of those with moderate/severe nausea. 5 of 6 had subjective improvement by day#1. Moderate/severe pain improved to tolerable levels in 5/6 by day #1. Of the 4 cases with evaluable data on resumption of PO intake, time to resume PO ranged from 1-4 days.\n\n\nCONCLUSION\nCombination medical therapy may provide rapid improvement in symptoms associated with malignant bowel obstruction and dysfunction.",
"title": ""
},
{
"docid": "ebea79abc60a5d55d0397d21f54cc85e",
"text": "The increasing availability of large-scale location traces creates unprecedent opportunities to change the paradigm for knowledge discovery in transportation systems. A particularly promising area is to extract useful business intelligence, which can be used as guidance for reducing inefficiencies in energy consumption of transportation sectors, improving customer experiences, and increasing business performances. However, extracting business intelligence from location traces is not a trivial task. Conventional data analytic tools are usually not customized for handling large, complex, dynamic, and distributed nature of location traces. To that end, we develop a taxi business intelligence system to explore the massive taxi location traces from different business perspectives with various data mining functions. Since we implement the system using the real-world taxi GPS data, this demonstration will help taxi companies to improve their business performances by understanding the behaviors of both drivers and customers. In addition, several identified technical challenges also motivate data mining people to develop more sophisticate techniques in the future.",
"title": ""
},
{
"docid": "72b4ca7dbbd4cc0cca0779b1e9fdafe4",
"text": "As population structure can result in spurious associations, it has constrained the use of association studies in human and plant genetics. Association mapping, however, holds great promise if true signals of functional association can be separated from the vast number of false signals generated by population structure. We have developed a unified mixed-model approach to account for multiple levels of relatedness simultaneously as detected by random genetic markers. We applied this new approach to two samples: a family-based sample of 14 human families, for quantitative gene expression dissection, and a sample of 277 diverse maize inbred lines with complex familial relationships and population structure, for quantitative trait dissection. Our method demonstrates improved control of both type I and type II error rates over other methods. As this new method crosses the boundary between family-based and structured association samples, it provides a powerful complement to currently available methods for association mapping.",
"title": ""
},
{
"docid": "b466803c9a9be5d38171ece8d207365e",
"text": "A large number of saliency models, each based on a different hypothesis, have been proposed over the past 20 years. In practice, while subscribing to one hypothesis or computational principle makes a model that performs well on some types of images, it hinders the general performance of a model on arbitrary images and large-scale data sets. One natural approach to improve overall saliency detection accuracy would then be fusing different types of models. In this paper, inspired by the success of late-fusion strategies in semantic analysis and multi-modal biometrics, we propose to fuse the state-of-the-art saliency models at the score level in a para-boosting learning fashion. First, saliency maps generated by several models are used as confidence scores. Then, these scores are fed into our para-boosting learner (i.e., support vector machine, adaptive boosting, or probability density estimator) to generate the final saliency map. In order to explore the strength of para-boosting learners, traditional transformation-based fusion strategies, such as Sum, Min, and Max, are also explored and compared in this paper. To further reduce the computation cost of fusing too many models, only a few of them are considered in the next step. Experimental results show that score-level fusion outperforms each individual model and can further reduce the performance gap between the current models and the human inter-observer model.",
"title": ""
},
{
"docid": "cc7179392883ad12d42469fc7e1b3e01",
"text": "A low-profile broadband dual-polarized patch subarray is designed in this letter for a highly integrated X-band synthetic aperture radar payload on a small satellite. The proposed subarray is lightweight and has a low profile due to its tile structure realized by a multilayer printed circuit board process. The measured results confirm that the subarray yields 14-dB bandwidths from 9.15 to 10.3 GHz for H-pol and from 9.35 to 10.2 GHz for V-pol. The isolation remains better than 40 dB. The average realized gains are approximately 13 dBi for both polarizations. The sidelobe levels are 25 dB for H-pol and 20 dB for V-pol. The relative cross-polarization levels are 30 dB within the half-power beamwidth range.",
"title": ""
}
] |
scidocsrr
|