PhD Opportunities

The following research projects are available to postgraduate applicants as potential PhD projects in the School of Computing. 

Machine Learning

Text Analytics

Spatial Data

Visual Analytics

Dialogue Systems

Machine Translation

Sensor-Based Computing

Natural Language Processing

Speech, Audio, Digital Signal Processing

Management & Innovation

Distributed Computing

Software Defined Networking

Argumentation Theory

Medical Informatics

Computational Trust

Cognitive Load Modelling

Other

IT in Education

 


A Reasoning Engine for Decision-Making with Vague, Conflicting and Uncertain Information: Extending Abstract Argumentation to Graded and Vague Concepts

Two characteristics are ubiquitous in human decisions: they often happen in situations of conflicting information, and the concepts involved are often vague and only partially true. Practitioners in AI have developed sophisticated systems to manage uncertainty in decision-making. Multi-valued and fuzzy logics are suitable for modelling vague and partially true information, while recent work in Abstract Argumentation Theory has produced a set of elegant and sound semantics for resolving conflicts between pieces of evidence. However, abstract argumentation can only deal with Boolean evidence, and it has little or no support for fuzzy, uncertain or partially true arguments. On the other hand, fuzzy logic systems do not embed any conflict-resolution mechanism comparable to those produced by abstract argumentation. The aim of this PhD project is to extend the state of the art in reasoning systems by investigating how fuzzy reasoning can be integrated into abstract argumentation theory. The challenge is to define a framework that is sound in the decisions it produces, able to explain and justify those decisions, able to integrate different multi-valued logic semantics, and computationally feasible. The framework will be integrated with complementary uncertainty-management approaches, such as probability and possibility theory. Implemented versions of the framework will be evaluated in diverse fields such as clinical decision making and financial risk assessment.
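To illustrate the kind of machinery involved, the sketch below computes the grounded extension of a Dung-style abstract argumentation framework in plain Python; the arguments and attack relation are hypothetical examples, not part of the project itself.

```python
# Minimal sketch: grounded extension of a Dung-style abstract argumentation
# framework. The arguments and attacks below are hypothetical examples.

def grounded_extension(arguments, attacks):
    """Iteratively accept arguments all of whose attackers have been defeated."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a not in accepted and attackers[a] <= defeated:
                accepted.add(a)                       # all attackers are out
                # everything attacked by an accepted argument is defeated
                defeated |= {y for (x, y) in attacks if x == a}
                changed = True
    return accepted

# Example: a attacks b, b attacks c  ->  grounded extension is {a, c}
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(grounded_extension(args, atts))   # {'a', 'c'}
```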

More information available from Dr Pierpaolo Dondio



Exploring Representation in Text Analytics

Feature representations of textual content vary from the traditional bag-of-words, through word and character n-grams and feature-enhanced representations such as Latent Semantic Indexing, to the word embeddings produced by deep-learning implementations such as word2vec. The objective of this project is to explore the impact of different representations on typical text analytics applications such as topic modelling, sentiment analysis and document classification. One focus of the investigation is the feasibility and practical application of different representations in real-world environments, which face challenges such as response time and multiple languages.
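As a rough illustration of the contrast between representations, the sketch below builds a bag-of-words matrix and trains a small word2vec model on a toy two-document corpus; it assumes scikit-learn and gensim (4.x) are installed, and the corpus is purely illustrative.

```python
# Sketch: two representations of the same tiny corpus (illustrative only).
# Assumes scikit-learn and gensim are installed.
from sklearn.feature_extraction.text import CountVectorizer
from gensim.models import Word2Vec

docs = ["the service was excellent", "the food was terrible"]

# Bag-of-words: one sparse count vector per document
bow = CountVectorizer()
X = bow.fit_transform(docs)
print(bow.get_feature_names_out(), X.toarray())

# Word embeddings: one dense vector per word, learned from co-occurrence
tokens = [d.split() for d in docs]
w2v = Word2Vec(sentences=tokens, vector_size=25, min_count=1, window=2, epochs=50)
print(w2v.wv["excellent"][:5])   # first few dimensions of one word vector
```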

More information available from Dr Sarah Jane Delany



Quality Control in Crowdsourcing 

Online crowdsourcing enables tasks to be performed by a large number of people. There are a variety of micro-working platforms, such as Amazon Mechanical Turk, which provide crowdsourcing services. Workers are normally rewarded with a payment for completing tasks. Tasks normally require human input and can vary from elementary tasks, such as labelling images or translating a phrase, to more complex tasks that may require expert knowledge, such as diagnosis from images. A major challenge with crowdsourcing is to ensure good quality work for the available budget. Research (e.g., Marge, Banerjee, and Rudnicky 2010) has shown that there is no correlation between the payment and the final quality, as increasing the price is believed to attract spammers (i.e., those who cheat by answering randomly to get the payment).

There are a variety of quality control mechanisms used in crowdsourcing, such as pre-work tests, statistical filtering, expert review and worker reputation. The objective of this project is to explore how to get the best quality outcomes in crowdsourcing scenarios.
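As a minimal illustration of one common quality-control baseline, the sketch below aggregates redundant crowd labels by majority vote and reports inter-worker agreement; the task names and labels are hypothetical.

```python
# Sketch: majority voting over redundant crowd labels (hypothetical data).
from collections import Counter

# worker answers per task: task_id -> labels from different workers
answers = {
    "img_01": ["cat", "cat", "dog"],
    "img_02": ["dog", "dog", "dog"],
}

def aggregate(labels):
    """Return the majority label and the fraction of workers who agree."""
    label, votes = Counter(labels).most_common(1)[0]
    return label, votes / len(labels)

for task, labels in answers.items():
    print(task, aggregate(labels))   # low agreement can flag a task for expert review
```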

More information available from Dr Sarah Jane Delany



Contextual Annotation of Unstructured Text

With the advances in social media and online frameworks for user-generated content, there has been an explosion of online content: photos, videos, blogs, etc. Most non-textual online content has associated unstructured text labels or comments which can provide additional context. This context is useful for a variety of use cases, e.g. to facilitate the categorisation of online content or to facilitate ontology population in specific domains.

This project focusses on exploring mechanisms to annotate and enrich unstructured textual content to provide additional relevant context.  The project will involve research in a variety of areas including natural language processing, named entity recognition, linked open data, automatic annotation and ontology construction.
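One plausible starting point is off-the-shelf named entity recognition over the unstructured text attached to an item; the sketch below assumes spaCy and its small English model (en_core_web_sm) are installed, and the caption is invented purely for illustration.

```python
# Sketch: extracting named entities as candidate context annotations.
# Assumes spaCy and the small English model (en_core_web_sm) are installed.
import spacy

nlp = spacy.load("en_core_web_sm")
caption = "Sunset over the Ha'penny Bridge in Dublin, taken during the 2016 marathon."

doc = nlp(caption)
annotations = [(ent.text, ent.label_) for ent in doc.ents]
print(annotations)   # e.g. [('Dublin', 'GPE'), ('2016', 'DATE'), ...]
```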

More information available from Dr Sarah Jane Delany



Enhancing Decision Making with Argumentation Theory

Argumentation theory (AT) is an important new multi-disciplinary topic in Artificial Intelligence (AI) that incorporates elements of philosophy, psychology and sociology and studies how people reason and express their arguments. It systematically investigates how arguments can be built, sustained or discarded in a defeasible reasoning process, and the validity of the conclusions reached through the resolution of potential inconsistencies. Because of its simplicity and modularity compared to other reasoning approaches, AT has been gaining importance for enhancing decision-making. This project aims to study the impact of defeasible reasoning and formal models of argumentation theory for supporting and enhancing decision-making. Multiple fields of application will be tested against state-of-the-art approaches: decision-making in health care, multi-agent systems, trust and the Web.

More information available from Dr Luca Longo



Mental Workload and Usability Methods for enhancing Web Design

The demands of evaluating the usability of interactive systems have produced, in recent decades, various assessment procedures. In the context of web design, when selecting an appropriate procedure it is desirable to take into account the effort and expense required to collect and analyse data. For this reason, web designers have tended to adopt cheap subjective usability assessment techniques for enhancing their systems. However, there is a tendency to overlook aspects of the context and the characteristics of the users during the usability assessment process. For instance, assessing usability in a testing environment is different from assessing it in an operational environment. Similarly, a skilled person is likely to perceive usability differently from an inexperienced person. For this reason the notion of performance is acquiring importance for enhancing web design. However, assessing performance is not a trivial task, and many computational methods and measurement techniques have been proposed. One important construct that is strictly connected to performance is human Mental Workload (MWL), often referred to as Cognitive Workload. Several MWL assessment procedures have been proposed in the literature, but a measure that can be applied to web design is lacking. Similarly, recent studies have tried to employ the concept of MWL jointly with the notion of usability. Despite this interest, however, not much has been done to link these two concepts together and investigate their relationship.

The aim of this research is to shed light on the relationship between these two concepts and to design a computational model of mental workload assessment that will be tested in user studies and empirically evaluated in the context of web design.

More information available from Dr Luca Longo



Enhancing the representation of the construct of Human Mental Workload with Argumentation Theory and defeasible reasoning

Argumentation theory (AT) is an important new multi-disciplinary topic in Artificial Intelligence (AI) that incorporates elements of philosophy, psychology and sociology and studies how people reason and express their arguments. It systematically investigates how arguments can be built, sustained or discarded in a defeasible reasoning process, and the validity of the conclusions reached through the resolution of potential inconsistencies. Because of its simplicity and modularity compared to other reasoning approaches, AT has been gaining importance for enhancing knowledge representation. This project aims to study the impact of defeasible reasoning and formal models of AT for enhancing the representation of the ill-defined construct of human mental workload (MWL), an important interaction design concept in human-computer interaction (HCI). The argumentation theory approach will be compared against other knowledge-representation approaches.

More information available from Dr Luca Longo



Computational Trust: automatic assessment of trust of online information

Scientific research on computational mechanisms for trust and reputation in virtual societies is an emerging discipline within Artificial Intelligence, aimed at increasing the reliability, trust and performance of electronic communities and online information. Computer science has moved, in recent decades, from the paradigm of isolated machines to the paradigm of networks and distributed computing. Similarly, Artificial Intelligence is quickly shifting from the paradigm of isolated and non-situated intelligence to the paradigm of situated, collective and social intelligence. This new paradigm, together with the emergence of information society technologies, is responsible for the increasing interest in trust and reputation techniques applied to public online information, communities and social networks. This study aims to investigate the nature of trust, the factors that affect trust in online information, and the design of a computational model for assessing trust. The model will be evaluated empirically with user studies involving several websites and participants.

More information available from Dr Luca Longo



Storage And Indexing of Large Point Cloud Data

In recent years, three-dimensional (3D) data has become increasingly available, in part as a result of significant technological progress in Light Detection and Ranging (LiDAR).
 
LiDAR provides longitude and latitude information delivered in conjunction with a GPS device, and elevation information generated by a pulse or phase laser scanner, which together provide an effective way of acquiring accurate 3D information about terrestrial or man-made features. The main advantages of LiDAR over conventional surveying methods lie in the high accuracy of the data and the relatively little time needed to scan large geographical areas. LiDAR scans provide a vast number of data points that result in especially rich, complex point clouds. Spatial Information Systems (SISs) are critical to the hosting, querying and analysing of such data sets, and feature-rich SISs have been well documented.
 
However, support for 3D capabilities in such systems has only recently been addressed, and large point cloud data is not their main focus. This project aims to overcome the shortcomings of current technology and provide support for storing, querying and analysing LiDAR data without the need for Digital Elevation Models (DEMs) or Triangular Irregular Networks (TINs), instead harvesting the information in its native point cloud form.
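As a small illustration of the kind of query such a system must serve, the sketch below indexes synthetic 3D points with a k-d tree and runs a radius search; it assumes NumPy and SciPy are available and is not a proposal for the project's actual storage engine.

```python
# Sketch: spatial indexing of raw point-cloud data (synthetic points).
# Assumes NumPy and SciPy are available.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
points = rng.uniform(0, 100, size=(1_000_000, 3))   # x, y, z in metres

tree = cKDTree(points)                        # build the index once
centre = np.array([50.0, 50.0, 10.0])
idx = tree.query_ball_point(centre, r=2.0)    # all points within 2 m of the centre
print(len(idx), "points within 2 m of", centre)
```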

More information available from Dr Bianca Schoen-Phelan



Spatial Data Analytics

Lineage, positional accuracy, attribute accuracy, completeness and semantic accuracy are just some of the quality dimensions used to assess spatial data. Traditionally, cartographers have been most occupied with the quality of spatial data. More recently, most likely owing to relatively novel spatial data acquisition methods such as Light Detection and Ranging (LiDAR), spatial data quality has become an issue that interests a myriad of disciplines. As a subset of the recent big data explosion, the increasing ease of using and providing spatial data means that spatial data statistics are an increasingly important area of research. For example, geographically weighted regression (GWR) is one of the techniques used to explain relationships between variables that cannot otherwise be explained with global models.
This research harnesses the emergence of multiple spatial data sources for meaningful statistical data analysis.
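For readers unfamiliar with GWR, the sketch below fits a locally weighted least-squares model at a single target location using a Gaussian kernel; the data are synthetic and the bandwidth is an arbitrary assumption.

```python
# Sketch: geographically weighted regression at a single target location.
# Synthetic data; the Gaussian bandwidth is an arbitrary assumption.
import numpy as np

rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(200, 2))                 # observation locations
x = rng.normal(size=200)
y = 1.0 + (0.2 * coords[:, 0]) * x + rng.normal(scale=0.1, size=200)  # slope drifts eastwards
X = np.column_stack([np.ones_like(x), x])

def gwr_at(target, bandwidth=2.0):
    """Weighted least squares around one location (Gaussian distance kernel)."""
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-(d / bandwidth) ** 2)
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)        # local [intercept, slope]

print(gwr_at(np.array([1.0, 5.0])))   # local coefficients in the west
print(gwr_at(np.array([9.0, 5.0])))   # local coefficients in the east
```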

More information available from Dr Bianca Schoen-Phelan



High Performance Spatial Analytics

For the last decade, the importance and pervasiveness of spatial data has been repeatedly noted in the media as well as in research publications. Geographic Information Systems (GIS) have gained everyday popularity mainly through satellite navigation devices and software for phones. The availability of spatial data has increased likewise, with a growth in diverse, socially volunteered spatial information, either as a by-product of human communication or contributed deliberately on platforms such as OpenStreetMap. This, in combination with the general big data trend, means that we now face the positive challenge of analysing these large and diverse spatial data sets in a meaningful way.

Taking inspiration from large-scale data analysis in the non-spatial domain, we see a shift from traditional relational databases and their data warehouses towards combinations of relational databases with NoSQL and MapReduce systems to aid the management and analysis of large datasets. Spatial databases have been around for several decades and have evolved to strong maturity and widespread usage, both as standalone systems and as a backend to GIS, which traditionally relied mainly on file-based data management. The spatial research community is slowly exploring NoSQL capabilities for the management and analysis of spatial information, and some first efforts to incorporate MapReduce into spatial data warehouses have emerged. The advantages of NoSQL and MapReduce certainly lie in their flexible data schemas and massive parallel processing capabilities.

This project proposes to advance knowledge in this area with a view to combining traditional spatial database systems with NoSQL for better spatial analytics and, consequently, faster near-real-time decision making. Specifically, this research is interested in using NoSQL graph databases for relationship analysis of volunteered spatial data. The need for such research arises from the very advantage that crowd-sourced spatial data brings: multiple sources referencing the same urban structure or spatial location, which often results in poor recognition of their commonalities and relationships.
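As a toy illustration of the map/reduce pattern applied to volunteered spatial data, the sketch below bins point observations into grid cells and counts them in plain Python; a real deployment would of course use a distributed engine or a NoSQL store rather than this in-memory stand-in.

```python
# Sketch: the map/reduce pattern applied to spatial point counts (synthetic data).
from collections import defaultdict

points = [(53.349, -6.260), (53.350, -6.261), (48.857, 2.352)]   # lat, lon observations

def map_phase(pt, cell=0.01):
    """Map each point to a grid-cell key with a count of 1."""
    lat, lon = pt
    return (round(lat / cell), round(lon / cell)), 1

def reduce_phase(pairs):
    """Sum counts per grid-cell key."""
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return counts

print(reduce_phase(map_phase(p) for p in points))   # two Dublin points share one cell
```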

More information available from Dr Bianca Schoen-Phelan



Spatial Data Models for IoT Real-Time Data Analytics

The Internet of Things (IoT) has found increasing adoption since the term was first coined in the late 1990s. It describes the idea that machines, sensors and devices work together, and independently, to measure and analyse their environment. Since then the IoT has slowly started to pervade everyday life. The IoT enjoys far-reaching popularity, particularly as part of the WHO Healthy Cities Network initiative and within the smarter cities community, as can be seen in the showcase city of Barcelona, Spain.

 

The IoT produces a wide range of data that needs to be stored and further analysed. This data is diverse, ranging from weather information to usage levels of garbage bins and noise measurements. What all of these data sources have in common is that they are tied to a geographic location.

 

Databases are often used for storing large sets of data, especially where concurrent and web access needs to be facilitated. Since the inception of databases in the late 1970s, database models have always had one goal: to accommodate the purpose of the specific data usage. For example, Entity Relationship models are used specifically for transactional processing, whereas different models have emerged for analytics purposes in data warehouses (the star and snowflake schemas). Spatial databases are used for storing and managing access to data of a spatial nature, and the spatially referenced sensor and location data from the IoT benefits from this technology. A successful IoT network needs high-velocity data ingestion, spatial data models and analytics, and real-time query execution.

Data models for spatial databases are typically categorised as either vector or raster, and can be discrete or continuous (though vector data does not lend itself very well to storing continuous data). Originally, these models were built for relatively small datasets that change slowly. The IoT provides the opposite, and thus new spatial data models are needed to accommodate this exciting use case.

More information available from Dr Bianca Schoen-Phelan



Context Merge from Crowd-Sourced Spatial Data

This project makes a contribution in the area of aggregating and merging crowd-sourced information in order to integrate text, image and spatial data layers. The recent decade has seen an unprecedented surge in publicly available crowd-sourced data, and even large companies such as Google have started to embrace its potential, for instance in the Google Maps traffic information feature. Crowd-sourced projects have enjoyed wide recognition and use since the term was first coined around 2006 in Wired magazine; it describes the phenomenon of members of the general public providing resources and time in order to accomplish a project.

Nearly in parallel, providers of traditionally text-only information services, such as Twitter, have started to incorporate location into their feeds. On the one hand, this novel opportunity to harness new sources of crowd-sourced spatial information is promising. On the other hand, the biggest challenge lies in merging spatial information that references the same location or spatial structure but originates from heterogeneous data sources, in order to perform meaningful analysis. This project bridges this gap by designing novel techniques for spatial context merging.

More information available from Dr Bianca Schoen-Phelan



 Clinical Decision Support Systems

A large number of Clinical Decision Support Systems (CDSS) exist which can be used by clinicians to support their role in healthcare. However, recent changes to international regulations (i.e. European and FDA) regarding the development of such software have resulted in a large number of these applications being banned from use, as they do not adhere to the current regulations. Currently, only a small number of CDSS have been approved for use in a clinical environment; the producers of these applications have provided the necessary information to regulatory bodies to prove the safety and effectiveness of their software.

This research will develop a CDSS which can be executed on a mobile device. Currently, only one CDSS application designed for a mobile device is approved for use within the EU. A CDSS will be developed in accordance with current international medical device software standards, such as IEC 62304 and ISO 13485, and regulations, such as FDA 21 CFR Part 820 and MDD 2007/47/EC. To accompany this, a roadmap will be produced which CDSS development organisations can follow when developing regulatory-compliant software.

More information available from Dr Martin Mc Hugh 



University-Industry linkages for Software Innovation

Software innovation includes advances in products, processes or services and occurs in all sectors of the economy. Disruptive technologies such as mobile, cloud computing and Web 2.0 have transformed the software sector in recent years, making much previous research obsolete. OECD research from 18 countries showed that young Small to Medium-sized Enterprises (SMEs) were responsible for up to 42% of total job creation over the last decade, and SMEs represent 90% of all businesses in the European Union, for example. SMEs have distinctive characteristics and are often at a disadvantage when competing for skills and talent with larger firms, particularly multinationals in the ICT sector. Effective knowledge flow between centres of software innovation expertise and the companies that need this expertise is therefore critical for economic success.

Few research studies empirically examine the role of universities in innovation by discipline or by industry, although the disciplines and the environment in which knowledge is produced are known to be important factors in knowledge transfer and innovation. The range of potential mechanisms for University-Industry (U-I) linkages is extensive, including collaborative projects, student projects, contract research, scientific publications, patents, technology licences, incubation space, sponsorship of postgraduates, and participation in conferences and other networking events. Some mechanisms are more effective than others, however, and we need to determine which activities work best in U-I linkages for software innovation in the SME sector.

Working with over ten international partners based in major ICT hubs in Europe, Asia and elsewhere, this research will build on existing work by focussing on the distinctive dynamics of U-I linkages for software innovation. It will develop and evaluate U-I models which ensure that SMEs in all sectors can access the software innovation expertise available in universities in an effective way.

More information available from Dr Deirdre Lillis



Managing complex interdisciplinary research projects

The complexity of the challenges facing our societies and economies is increasing, and solving them requires large-scale, interdisciplinary and international approaches. Much of the research in the European Union, through its pillar programmes (H2020 and others), is characterised by large, complex, interdisciplinary, international and inter-sectoral research projects. To be successful, these projects need to be strategically planned, actively managed and quality-assured across many dimensions, including diverse stakeholders, sectors, countries and academic disciplines. Internal integration across work packages and deliverables is also critical.

 

Such complex research projects have distinctive characteristics which present challenges for their management including (i) ill-defined outcomes due to the nature of research (ii) the balance between maintaining a strategic focus and ensuring disciplinary integrity (iii) the diverse and temporary team of independent partners based in multiple international locations (iv) the increasing emphasis on interdisciplinary research which brings together researchers from disparate disciplines and methodological backgrounds and (v) the involvement of stakeholders from multiple sectors (higher education, large companies, SMEs, NGOs, government agencies etc.). 

 Leading, well-established, project management methodologies in the commercial sector explicitly include integration of project activities as a core project management activity. Project integration must be managed in the same way as costs, risk, quality and schedule. It is arguably one of the most critical aspects of research project management to ensure the impact of research is exploited fully. 

However, project management methodologies which take into account the challenges of complex research projects are under-developed, as is the management of interdisciplinary research projects in general. Using case studies of several major EU-funded research projects, this research aims to contribute to the development of a comprehensive management framework for the integration of complex interdisciplinary research projects. This framework will be informed by state-of-the-art evidence from four key areas: (i) integrating interdisciplinary research, (ii) integrating multiple sectors, (iii) integrating diverse stakeholders and (iv) integrating international research teams.

More information available from Dr Deirdre Lillis



Transient Distributed Supercomputer Clouds using Mobile Devices

Science-based data processing requirements have grown from terabytes to petabytes over the last 10 years, with exabyte-sized data projects currently under development. The computing power required to process this volume of data relies primarily on high-end supercomputer clusters, which are purpose-built, expensive to maintain and often inflexible in terms of the service they provide. Even the most powerful parallel supercomputers, whose processing power is measured in teraflops, will struggle to process this impending tsunami of scientific data.

This research aims to investigate how a global data processing cloud can be constructed which scales horizontally, using mobile devices to process scientific data at supercomputer processing rates. This distributed parallel processing supercomputer should be easy to create and destroy, allowing different types of scientific data to be processed as required. It would also incorporate security protocols, be resilient to individual node failure, and operate using an economic model which supports spot-price bidding to control processing costs.

More information available from Dr Paul Doyle



Automated Monitoring of Group Environment

In nursing homes and other assistive group environments, situations arise where residents' care is substandard, such as the verbal or physical abuse of patients by staff or a lack of adequate exercise for patients. Initial solutions to external monitoring of these environments suggest the use of cameras [1][2], but this poses problems of privacy and the manual effort needed to analyse camera footage.

In such situations, we need solutions that deliver real-time alerts and/or post-incident analysis using several streams of information, including visual, auditory and activity information from sensors. The interpretation of this sensor information needs to be both accurate in identifying abusive incidents and fast, so that problems are identified as soon as possible.

For example, auditory signals from a microphone can be used to monitor noise levels in a room. Audio patterns that are associated with incidents, such as raised voices, can be captured and classified using learning algorithms. Pressure sensors can be used to detect the length of time spent in bed, and so build up an activity profile for a resident, with any change in typical patterns highlighted. Camera images cannot rely on manual monitoring, so they require automated image processing to pick up patterns associated with problem incidents. These problems require the investigation and use of image processing and machine learning techniques to develop improved and new algorithms for detecting scenarios of interest.
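As a minimal sketch of the audio strand described above, the example below trains a classifier to separate "normal" from "raised-voice" audio frames; the two features and all the data are synthetic stand-ins, assuming scikit-learn and NumPy are available.

```python
# Sketch: frame-level audio classification for incident detection.
# Features and labels below are synthetic stand-ins for real sensor data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
# two hypothetical features per audio frame: RMS energy and zero-crossing rate
normal = rng.normal([0.1, 0.05], 0.02, size=(500, 2))
raised = rng.normal([0.4, 0.15], 0.05, size=(500, 2))
X = np.vstack([normal, raised])
y = np.array([0] * 500 + [1] * 500)            # 0 = normal, 1 = raised voices

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict([[0.38, 0.14]]))             # a loud frame is likely flagged as an incident
```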

In summary, the overall purpose of this project is to investigate and develop a set of solutions to assist in automated monitoring of group environments.

More information available from Dr Susan McKeever



POMDPs for Situated Dialogue Systems

Spoken Dialogue Systems are collections of computational components that are combined in intelligent ways to produce artificial interaction partners such as Siri. The state of the art in dialogue planning is the Partially Observable Markov Decision Process (POMDP). These probabilistic systems make the best guess of the next thing the agent should say based on the system's current estimate of the dialogue state. This project will go beyond the current state of the art by looking at the training of POMDPs in spatially situated applications such as robots and video game characters.
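As a toy illustration of the core idea, the sketch below performs the POMDP belief update over two hypothetical dialogue states given a noisy speech-recognition observation; all states, models and numbers are invented for illustration.

```python
# Sketch: POMDP-style belief update over hidden dialogue states (toy numbers).
import numpy as np

states = ["wants_coffee", "wants_tea"]
belief = np.array([0.5, 0.5])                  # current estimate of the user's goal

# P(s' | s, system_action): assume the goal rarely changes mid-dialogue
transition = np.array([[0.95, 0.05],
                       [0.05, 0.95]])

# P(observation | s'): the recogniser heard something like "coffee"
obs_likelihood = np.array([0.8, 0.2])

predicted = transition.T @ belief              # predict step
belief = obs_likelihood * predicted            # correct step with the observation
belief /= belief.sum()                         # normalise
print(dict(zip(states, belief.round(3))))      # e.g. {'wants_coffee': 0.8, 'wants_tea': 0.2}
```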

More information available from Dr Robert Ross



Deep Machine Translation for Sign Language

Sign languages are not simply an alternative form of representing a given spoken language: American Sign Language, for example, is a very different language from American English, and is also a different language from British Sign Language. Machine translation to and from signed languages is a tool which can be of significant benefit to deaf members of the community. In this project we will look at the application of state-of-the-art deep learning based machine translation systems to the problem of automated sign language translation.

More information available from Dr Robert Ross



Animal Behaviour Modelling & Monitoring 

The agricultural sector is a major economic driver both in Ireland and around the globe. Within the sector, livestock such as cows, sheep and goats are highly valued assets that require monitoring and frequent care. In this project we will look at the application of activity and state monitoring based on visual information to detect atypical animal states. The major challenge will be developing models that can easily be deployed and customized to the specifics of a farm or animal production facility. State-of-the-art methods in deep learning and image processing will be used for this project.

More information available from Dr Robert Ross



Activity Recognition in Care Homes

Many developed countries face a 'care crisis' as a larger and larger section of the population requires care in residential homes while government budgets are stretched to the limit. In this context, the use of assisted monitoring systems for homes is a very appealing and potentially cost-effective aid in providing good quality care. In this project we will help to provide solutions to this problem by investigating the use of neural network based deep learning systems for monitoring resident activity in care homes. Monitoring will be conducted principally with vision systems, as in the long run these are likely to be the most cost-effective solution.

More information available from Dr Robert Ross



POMDPs linking sensors to Dialogue in Care Homes

Assistive systems for care homes should provide appropriate suggestions and care hints to users in order to maximize their quality of life and independence of living. For users to embrace assistive technologies, it is essential that all suggestions be well timed and appropriate to the situation. Giving too few suggestions risks a person's health, while too many suggestions can cause the user to disengage with the system. In this project we will look at the use of Partially Observable Markov Decision Processes in the construction of dialogue-based assistance systems for care homes. The focus of this work will be on data fusion and fission processes based around a Partially Observable Markov Decision Process.

More information available from Dr Robert Ross



Personality of Assistive Agents in Care Homes

The wide-scale acceptance of assistance avatars in venues such as care homes depends on the acceptance of the avatar and its assistive behaviors by the user. In spoken dialogue systems, an adjustable personality that can be customized to the needs of the user is essential for this acceptance to take place. In this project we will look at the customization of state-of-the-art dialogue systems with personality traits so as to accommodate different personality types. This will involve a crossover between empirical results in clinical psychology and statistical dialogue planning processes. We will look in particular at personality models based on the 'Big Five' personality traits and their appropriateness for the modelling of artificial systems.

More information available from Dr Robert Ross



Multilingual Ontology Extraction

Lightweight ontologies and taxonomies provide human- and machine-understandable descriptions of domains. While ontologies can be manually constructed, and frequently are, the manual construction process is slow and error-prone. The automatic and semi-automatic construction of ontologies for specific domains provides a practical balance between thorough manual construction methods and fully automated methods. The goal of this project is to investigate the human-in-the-loop as an expert who can review and improve ontology contents. Rather than focusing on universal ontologies, we will examine the use of human-in-the-loop ontology construction in well-defined ontology domains such as food types. This project will also look at the difficulties encountered in learning ontologies from free text in under-resourced languages.

More information available from Dr Robert Ross



Diseased Cattle Identification from Vision Systems

Low-cost sensors are opening up new opportunities for collaborative work between computer scientists and agricultural scientists. One particular problem is the analysis of animal behavior for disease detection in a herd. While much work in this area has concerned the use of customized, expensive sensors, in this project we will look at animal activity and state tracking based purely on visual information. A major challenge here will be dealing with the individual characteristics of animals and the need to track multiple animals from a single visual position where animals are viewed as a group. We will look at the use of deep learning techniques to automatically select suitable features from camera data.

More information available from Dr Robert Ross



Deep Learning for Astronomical Data

Astronomers all over the world are currently producing more data than they can possibly hope to study with current methods. New systems have to be able to cope with exabytes of data, and the storage and processing of this data is a significant challenge for astronomers and computer scientists alike. Moreover, the state of the art in image classification is so-called deep learning systems, which are highly computationally expensive. In this project we will use deep learning to learn features that are useful for identifying interesting objects in astronomical data. The focus of this project will not be on astronomy, but rather on the optimisation of deep learning and big data methods for extremely high-volume data. The results of this work are thus expected to be of significant interest to other high-volume data projects that make use of machine learning and high-performance computing.

More information available from Dr Robert Ross



VoIP Speech Quality Monitoring

Under normal conditions, sound quality using Voice over Internet Protocol (VoIP) is superior to regular fixed-line or mobile phone calls. However, the fidelity gains in terms of natural-sounding wideband speech can often be outweighed by cut-outs, delays or echo caused by network or connectivity problems. This project will use machine-learning techniques to develop classification algorithms that monitor VoIP calls for speech quality issues. These algorithms will be combined into a computer model that implements real-time monitoring and prediction of speech quality.

More information available from Dr Andrew Hines



Internet accessibility for hearing impaired users

The channels we use for communication are rapidly evolving, with services such as Skype and Google Hangouts making free worldwide calls possible using Voice over Internet Protocol (VoIP). VoIP is becoming more prevalent on mobile devices, increasing the opportunities for voice and video communication. Increased availability along with improved quality and reliability make VoIP services a viable alternative to traditional telephony.

For hearing-impaired users, speech intelligibility and understanding the speaker are a higher priority than the sound quality of the experience. This is equally true of users of hearing aids and cochlear implants. The aim of this project is to use computer models of the auditory periphery to evaluate intelligibility for hearing-impaired listeners of VoIP systems. By analysing speech signals with computer-based intelligibility and quality metrics, the project will explore new ways of maximising intelligibility and quality of experience for hearing-impaired users.

More information available from Dr Andrew Hines



Podcast Quality of Experience

Radio interviews on podcasts often include panellists in different locations, e.g. one panellist is in-studio while another calls in via telephone. While the speech fidelity of the studio DJ conducting the interview is crystal clear, the panellist calling in by phone often sounds muffled. To address these quality issues, techniques known as artificial bandwidth extension have been developed for mobile telephony. This project will seek to develop tools that apply post-processing filters, using artificial bandwidth extension and room acoustic filters, to enhance the audio quality of podcasts before they are uploaded.

More information available from Dr Andrew Hines



Automated marine stock estimation

Nephrops norvegicus, known locally as the Dublin Bay prawn, is big business for the Irish fishing industry. Ensuring a sustainable stock through annual quotas is key to this important industry, which was valued at almost €50m in 2014. Stock assessment is carried out with annual underwater video surveys across Nephrops fishing grounds around the Irish coast. The survey analysis is carried out by expert marine biologists, who count the burrows in a survey area to estimate the density of Nephrops for a given habitat. Using historical survey data, this project will use machine learning to estimate burrow density. The aim is to automate the manual inspection process to predict Nephrops populations. The work will involve statistical analysis of large datasets, evaluation and tuning of machine learning models, as well as video and image processing.

More information available from Dr Andrew Hines



Automated Visual Narratives

Narrative has emerged as a key component in how visualisation is practised on the web. Large media organisations, such as The Guardian and The New York Times, employ visualisation teams to assist in the creation of visual narratives, which have enabled authors to reach new audiences and convey large swathes of information in elegant visual displays. Narrative devices, such as maintaining intrigue and using “The Big Reveal”, have emerged as innovative ways to engage passive readers. Given the increasing amounts of data being generated daily, and the broad spectrum of visual literacy among regular web users, there is scope to investigate how visual narratives could be automatically generated from a range of differing datasets. These automated visual narratives must consider the application domain and the role and expertise of the reader, while applying narrative structures that generate insightful and interesting data stories. Machine learning techniques, such as deep learning, and novel visualisation interfaces will be used to support the creation of visual narratives during this research.

More information available from Dr John McAuley



Semi-automated Visual Analytics for Non-expert users

The emergence of Big Data and the turn to data-driven decision-making has created a need for data analytics to be embedded in every aspect of modern business. In many cases, this has resulted in analytics software being adopted by a less analytically-orientated user who may not have a background in statistics but has the required domain knowledge to make effective business decisions. However, often the sheer size and complexity of data can result in a user relying on surface-level analysis, applying default analytic procedures or representing data incorrectly with visualisation. The current generation of visual analytics tools include some level of chart recommendation, based on the dimensionality and type of data for example, yet there remains significant scope to research how users can be assisted during the visual analytics process. This could include amending the interface automatically to emphasise key characteristics in data, using the history of previous analytic sessions to highlight insights or automatically re-encoding visual variables based on a user’s level of expertise. The aim of this project is to use state-of-the-art machine learning techniques, such as Deep Learning, to investigate how semi-automated visual analytics can be effectively implemented for non-expert users.

More information available from Dr John McAuley



Measuring the impact of interactivity in Visual Analytics

Visual Analytics is the use of visualisation to support analytic reasoning. Over the last decade, Visual Analytics has enabled users with varying degrees of expertise to gain insights into large volumes of data. Interactivity is considered a core component of Visual Analytics, allowing users to think about data from different perspectives, encode variables in new ways or reconfigure visual elements into different visualisation views. Interactivity is increasingly used with animation as a way to illustrate trends or highlight potential correlations. Popular approaches, such as Hans Rosling’s Gap Minder, have caught the public imagination and shown that interactive animation is an effective means of storytelling. However, the impact of interactivity for insight generation is less well understood. This research seeks to investigate this question by asking how interactivity can support the development of insight for users engaged in visual analytics.

More information available from Dr John McAuley



Visual Analytics for Large Digital Libraries

Despite the emergence of digital media, text-based libraries remain the primary means of storing and disseminating knowledge. Although these libraries house huge amounts of data, abstracted to varying degrees into information and knowledge, the primary means of user interaction remains search-based, in which a ranked list of candidate resources is returned for a free-form text query. Previous research has addressed library visualisation, but often the profile of the user has been neglected or the application interfaces have focused too acutely on exploration. To address this gap, the aim of this project is to research new visual interfaces for large digital libraries. The project will consider advances in the field of visual analytics when addressing large text corpora, for example using machine learning (deep learning) to represent the connections between concepts in text documents, or using new visual mechanisms to illustrate trends at different conceptual levels. The project will focus on specific user roles, such as librarian, student and professor, and seek to develop novel visual mechanisms and interfaces that support those roles. The effectiveness of these novel visual interfaces will then be analysed through experimentation and in-depth user studies.

More information available from Dr John McAuley



Self Adapting Software Defined Networking

Software-Defined Networking (SDN) is an emerging architecture that is dynamic, manageable, cost-effective and adaptable, making it ideal for the high-bandwidth, dynamic nature of today's applications. This architecture decouples the network control and forwarding functions, enabling the network control to become directly programmable and the underlying infrastructure to be abstracted for applications and network services. Self-adaptive software, on the other hand, is dynamic and intelligent software that can adapt its behaviour and functionality based on the computational environment in which it is executed. This project asks what happens if we use advanced software techniques to make SDN self-organising, self-tuning, self-healing and self-optimising in the context of big data, while respecting the required Quality of Service (QoS).

More information available from Dr Basel Magableh



Adaptive Learning and Content Personalisation for Next Generation E-Learning Tools

Technology makes it possible to reach every single student on a personal level, delivering fine-grained and highly personalised content that can enhance the learning experience by capturing and understanding the learner's behaviour, interests, preferences and quality of experience. Using advanced machine learning techniques to profile the learner's behaviour and personalise e-learning content could enable us to implement adaptive learning software.

More information available from Dr Basel Magableh



Building the Next-Generation Mobile Operating System: Enhancing Integrated Natural Language Processing Frameworks with Deep Machine Learning Algorithms for Building Behavior-Aware Applications

Smartphones have become true personal assistants. Third-party apps give users nearly unlimited ways to share user-generated content (e.g. email, SMS, chat messages, pictures, audio). Much of this content is not processed locally on the device, due to the limited resources available on mobile devices. Natural Language Processing (NLP) can help make apps smarter, more dynamic and more intelligent by automatically analysing the meaning of content and taking appropriate actions on behalf of their users. However, due to its complexity, NLP has yet to find widespread adoption in smartphone or tablet applications, other than by sending users' sensitive data to backend services running in the cloud. The question we ask is: what if we blend the output of an on-device NLP framework with context awareness to extend the software's smart functionality?

More information available from Dr Basel Magableh



Using Sparse Distributed Memory to Transfer Human Long-Term Memory from One Mobile User to Another

Sparse Distributed Memory (SDM) was developed as a mathematical model of human long-term memory. How can we use SDM to index human knowledge and transfer that generated knowledge from human to human? The model corresponds beautifully to how humans, and animals with advanced sensory systems and brains, work: the signals we receive at two different times are hardly ever identical, and yet we can identify the source of the signal as a specific individual, object, place, scene or thing. On a smaller scale, mobile devices have fewer sensory systems, and the reaction to sensory events is pre-defined by software developers. Could we use SDM to index those sensory events and user-generated content in order to improve or predict both user and software reactions as an optimal response to specific events and content? In addition, if we manage to build a real-time index of a user's long-term memory, is it possible to transfer this knowledge to other users, so that they can use it as a source of learning and possibly adopt the same optimal reaction?
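As a rough sketch of how a Kanerva-style SDM stores and recalls patterns, the example below writes a binary pattern to randomly placed hard locations and reads it back from a noisy address; the dimensions and activation radius are toy assumptions and not tied to any particular mobile platform.

```python
# Sketch: Kanerva-style sparse distributed memory (toy dimensions).
import numpy as np

rng = np.random.default_rng(3)
N, M, RADIUS = 256, 2000, 112            # word length, hard locations, activation radius

addresses = rng.integers(0, 2, size=(M, N))   # fixed random hard-location addresses
counters = np.zeros((M, N), dtype=int)        # one counter vector per hard location

def active(addr):
    """Hard locations within Hamming distance RADIUS of the address."""
    return np.count_nonzero(addresses != addr, axis=1) <= RADIUS

def write(addr, data):
    counters[active(addr)] += np.where(data == 1, 1, -1)

def read(addr):
    return (counters[active(addr)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, size=N)
write(pattern, pattern)                       # autoassociative store
noisy = pattern.copy(); noisy[:20] ^= 1       # flip 20 bits of the address
print((read(noisy) == pattern).mean())        # fraction recovered, typically close to 1.0
```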

More information available from Dr Basel Magableh



Automated identification of fake user profiles in online media

The volume of social media content posted by users has grown enormously over the past decade. One of the challenges for social media providers is to validate user profile information against true user profiles. For example, children who claim to be older gain access to sites and content by creating user profiles with false age information. Other false information can include gender or location.

This project proposes to address this problem by automatically identifying risky user profiles, i.e. by developing a mechanism for analysing user content which does not match the declared user profile. This will involve developing activity or content profiles associated with specific user profiles, using machine learning techniques. The output of the project will be of interest to any social media or general media business on the internet. The technologies will include machine learning and text mining.

More information available from Dr Susan McKeever



Sentiment analysis of real-time online content

Every hour, more than 21 million new tweets are posted on Twitter and 300 hours of new video material are uploaded to YouTube. Media companies, advertisers and companies with branded products are amongst those keenly interested in the response to newly posted material. Gauging this response is difficult due to the volume of data posted, the range of topics and the variety of data sources involved. It requires real-time clustering of information, topics and channels, combined with analysis of the overall sentiment of the discussion around any one of these. This project aims to develop automated techniques to monitor the sentiment of topics within high-volume streams of social media data in real time. The techniques involved include machine learning and text analysis.

More information available from Dr Susan McKeever



Detection of abusive content in social media in a multi lingual environment

There has been enormous growth in the volume of user content posted to social media such as Twitter, Instagram, Facebook and YouTube. A major challenge for these platforms is to monitor and prevent the posting of abusive user content, be this bullying or other types of abusive text. Research work in this area is ongoing. A particular gap, however, is the detection of such content across multiple languages, as the tendency is to focus on English. This project addresses both the detection of abusive content and the challenge of doing so while taking account of multiple languages. The techniques used in the project will include machine learning, for building automated modules that categorise content, and text mining, for the parsing and analysis of text.

More information available from Dr Susan McKeever



Automated classification of internet video content

There has been enormous growth in the volume of video material posted on the internet for public and private consumption. For example, 300 hours of new video footage is uploaded to YouTube every minute [1]. A major challenge for businesses is to process the volume of uploaded video content. The video content needs to be classified into safe versus abusive content – and further into genres such as news, sports, comedy and education. Current methods focus heavily on visual and /or text content of video, with less focus on including embedded audio content [2]. This project initially proposes to use an audio-led machine learning approach to classification, based on the premise that the audio content of a digital audio-visual segment will provide rich information for classification. We will enhance our approach through exploring the latest techniques in image content for building classification features. The student will have the freedom to explore new mechanisms for improving video classification results.

More information available from Dr Susan McKeever



Investigating the role of cognitive load modelling for enhancing user experience (UX) assessment

Human mental workload (MWL) has gained importance, in the last few decades, as a fundamental design concept in human-computer interaction (HCI). At an early system design phase, designers require an explicit model to predict the mental workload imposed by their technologies on end users, so that alternative system designs can be evaluated. MWL can be intuitively defined as the amount of mental work necessary for a person to complete a task over a given period of time. However, this is a simplistic view, because MWL is a multifaceted and complex construct with a plethora of ad hoc definitions.
Although measuring MWL has advantages in interaction and interface design, its impact on user experience (UX) has not been sufficiently studied. This project focuses on the application of the construct of human mental workload in human-computer interaction and user experience, employing knowledge discovery and data mining (KDD) techniques as well as machine learning (ML) and other data analytical techniques borrowed from Artificial Intelligence (AI).

More information available from Dr Luca Longo



Assessing cognitive load of Human-web tasks with objective indicators of online user activity

Human mental workload (MWL) has gained importance, in the last few decades, as a fundamental design concept in human-computer interaction (HCI). At an early system design phase, designers require an explicit model to predict the mental workload imposed by their technologies on end users, so that alternative system designs can be evaluated. MWL can be intuitively defined as the amount of mental work necessary for a person to complete a task over a given period of time. However, this is a simplistic view, because MWL is a multifaceted and complex construct with a plethora of ad hoc definitions.
The concept of MWL has mainly been applied in psychology and ergonomics (human factors), and subjective assessment techniques have mainly been used for its measurement. This PhD project is focused on the design, development and evaluation of a framework for automatically capturing user activity on interactive systems (e.g. computers, mobiles and web pages) and automatically inferring MWL from this activity, without requiring explicit subjective feedback from users (e.g. interviews, surveys). Besides the development of the framework (software), knowledge discovery and data mining (KDD) techniques, as well as machine learning (ML) classifiers, are planned to be adopted for the evaluation of the framework.

More information available from Dr Luca Longo



High-Security, High-Performance Software Defined Networks

Software Defined Networks (SDNs) address the need for flexibility and speed in meeting increased bandwidth requirements, and are part of the evolution toward “X” as a Service (XaaS). With the rapid adoption of SDNs, a paradigm shift in network infrastructure is occurring: infrastructure is no longer confined to fixed routes controlled by hardware. Instead, SDNs provide capabilities to redefine the network using software abstraction. SDNs now do in software what could previously only be managed in hardware (routing, switching, forwarding), with the main advantages of being quick to redesign using inexpensive hardware. They provide a quick mechanism for fixing security issues (such as the Heartbleed bug) as well as rapidly deploying new routing configurations as the network evolves; this can be achieved by making use of a feature known as “experimental protocol” support. In addition, major technology companies (such as Oracle, Cisco and Google) have acquired SDN technology and entrusted it with managing their infrastructure and data warehouses, since the ability to reconfigure makes SDNs well suited to power and resource management of large-scale virtual systems in data warehouses.

However, SDNs introduce a number of security vulnerabilities across the platform which are not present in traditional networks. The centralised control mechanism at the heart of SDNs means that these issues can have catastrophic effects: manipulation of the SDN controller itself, or of the traffic passing through it, would allow an attacker to gain full control of a network topology. In essence, attackers can shift their focus from the host to the entire network. By identifying the vulnerabilities of the SDN platform, solutions can be implemented to prevent security attacks. Furthermore, rich sources of dynamic social media data can be used to predict vulnerabilities before they have been identified. Any such implementation should be efficient so that it does not impact heavily on network performance; the main challenge is to design a system which trades off security against performance to provide sufficient efficiency. The main areas of research we are considering are SDN security, SDN routing, SDN quality of service and SDN power management.

More information available from Dr Brian Keegan



Developing STEM Teaching Methods Through Modular Design

The aim of this project is to develop STEM/science capital through the use of a modular training and teaching system for primary education (age 4 to age 12). Science capital refers to science-related qualifications, understanding, knowledge (about science and “how it works”), interest and social contacts (e.g. knowing someone who works in a science-related job). There is evidence that children who do not engage with STEM disciplines before the age of 8 are unlikely to engage with STEM later on or to choose a career path in STEM. While there are many educational programmes to support STEM at secondary level, we believe that an approach which encourages early childhood engagement while also providing support for families would be of great benefit. The aim of this research is to develop bespoke modular teaching which can be adapted in line with technological advancements. Preliminary consultations with primary schools have identified a number of aspects of teaching STEM in the classroom which prove challenging; a major part of this is not just the content but how the content is delivered. Practical, problem-based learning of STEM-related subjects is often too abstract for young children to work on unaccompanied. Current trends in IT-based assistive technology indicate a proliferation of smart speaker systems (Google Home, Amazon Echo, etc.). Such devices may be utilised as learning supports in classrooms to aid in providing information to small groups of children whilst at the same time integrating technology.

The main deliverable from this research would be to provide a detailed and comprehensive study of primary school children's ability in STEM and to design and implement a STEM teaching pedagogy which is highly adaptable and accessible to teachers and parents.

More information available from Dr Brian Keegan or Dr Cathy Ennis
