I am a doctoral student and a computer science and machine learning researcher at the Human Experience Research Lab at the University of Florida, where I work on solving social good problems using groundbreaking machine learning and data mining techniques. In particular, I aim to address issues of Fairness, Accountability, Transparency, and Efficiency in AI solutions. Address: Gainesville, United States
Many ML models are opaque to humans, producing decisions too complex for humans to easily understand. In response, explainable artificial intelligence (XAI) tools that analyze the inner workings of a model have been created. Despite these tools' strength in translating model behavior, critics have raised concerns that XAI tools can be used for 'fairwashing' by misleading users into trusting biased or incorrect models. In this paper, we created a framework for evaluating explainable AI tools with respect to their capabilities for detecting and addressing issues of bias and fairness, as well as their capacity to communicate these results clearly to their users. We found that despite their capabilities in simplifying and explaining model behavior, many prominent XAI tools lack features that could be critical in detecting bias. Developers can use our framework to identify modifications needed in their toolkits to reduce issues like fairwashing.
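As an illustration of the kind of group-level bias check such a framework would expect an XAI toolkit to surface alongside its explanations, here is a minimal sketch assuming a binary classifier, a sensitive attribute column, and a demographic-parity criterion; none of these names or data come from the paper:

```python
# Hypothetical sketch: a group-level bias check an XAI toolkit could report
# next to its explanations. Column name "group" and the threshold are assumptions.
import numpy as np
import pandas as pd

def demographic_parity_gap(y_pred: np.ndarray, sensitive: pd.Series) -> float:
    """Absolute difference in positive-prediction rates between groups."""
    rates = pd.Series(y_pred).groupby(np.asarray(sensitive)).mean()
    return float(rates.max() - rates.min())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "group": rng.choice(["A", "B"], size=1000),   # illustrative sensitive attribute
        "score": rng.uniform(size=1000),              # illustrative model scores
    })
    y_pred = (df["score"] > 0.5).astype(int).to_numpy()
    print("Demographic parity gap:", demographic_parity_gap(y_pred, df["group"]))
```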
This chapter will focus on the concept of artificial intelligence (AI) and scrutinize whether AI is used to reinforce the social construct of racism or whether it promises to provide opportunity and justice in areas where they were previously lacking. Throughout this chapter, common examples of the everyday use of AI will be discussed to provide an understanding of how this technology has permeated multiple sectors of life, with the intention to continuously expand. This chapter will also explore the notion of AI inheriting the bias of its creator. This brings into question who is building this technology, how this technology should be used, and who is the intended beneficiary of its use. For experts and novices in the field, this chapter calls for critical consideration of what is being developed. For individuals unfamiliar with AI, this chapter is intended to bring awareness of the technology being used around them and how it has already impacted their lives.
Machine learning has become a popular tool in a variety of applications in criminal justice, including sentencing and policing. The media have brought attention to the possibility of predictive policing systems causing disparate impacts and exacerbating social injustices. However, there is little academic research on the importance of fairness in machine learning applications in policing. Although prior research has shown that machine learning models can handle some tasks efficiently, they are susceptible to replicating the systemic bias of previous human decision-makers. While there is much research on fair machine learning in general, there is a need to investigate fair machine learning techniques as they pertain to predictive policing. Therefore, we evaluate the existing publications in the field of fairness in machine learning and predictive policing to arrive at a set of standards for fair predictive policing. We also review the evaluations of ML applications in the area of criminal...
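As one concrete example of a standard such a review might converge on, the sketch below checks parity of false positive rates across demographic groups, an equalized-odds-style criterion often discussed for criminal justice models. The column names and data are illustrative assumptions, not taken from the reviewed studies:

```python
# Hedged sketch: false-positive-rate parity across groups as a candidate
# fairness standard for predictive policing models. All names are illustrative.
import pandas as pd

def false_positive_rate_gap(y_true, y_pred, sensitive) -> float:
    """Largest difference in false positive rates between any two groups."""
    df = pd.DataFrame({"y": y_true, "p": y_pred, "g": sensitive})
    negatives = df[df["y"] == 0]
    # Mean binary prediction over true negatives equals the false positive rate.
    fpr = negatives.groupby("g")["p"].mean()
    return float(fpr.max() - fpr.min())

print(false_positive_rate_gap(
    y_true=[0, 0, 0, 0, 1, 1],
    y_pred=[1, 0, 0, 0, 1, 0],
    sensitive=["A", "A", "B", "B", "A", "B"],
))
```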
Conversational Voice User Interfaces (VUIs) help us perform tasks in a wide range of domains these days. While there have been several efforts around designing dialogue systems and conversation flows, little information is available about technical concepts for extracting the critical information needed to address users' needs. For conversational VUIs to function appropriately as a decision aid, artificial intelligence (AI) that recognizes and supports diverse user decision strategies is a critical need. Following the design principle proposed by Kwon et al. [1] regarding the conversational flow between the user and the conversational VUI, we developed an AI-based mobile decision-aid (MODA) that predictively models and addresses users' decision strategies to facilitate users' in-store shopping decision process. In this paper, technical details about how MODA processes users' natural language queries and generates the most appropriate and intelligent recommendations are discussed. Th...
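The paper's own pipeline is not reproduced here, but a minimal sketch of the general approach, mapping a shopper's utterance to a decision strategy and extracting decision parameters, might look like the following. The strategy labels, training utterances, and attribute lexicon are invented for illustration:

```python
# Hedged sketch (not MODA's actual pipeline): a toy strategy classifier plus
# lexicon-based attribute extraction. Labels and lexicon are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set: utterance -> decision strategy
utterances = [
    "show me the cheapest filter that still blocks allergens",
    "I want the best overall balance of price and quality",
    "eliminate anything that is not MERV 13 or higher",
    "just give me the top rated brand",
]
strategies = ["lexicographic", "weighted_additive", "elimination_by_aspects", "lexicographic"]

strategy_clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                             LogisticRegression(max_iter=1000))
strategy_clf.fit(utterances, strategies)

ATTRIBUTE_LEXICON = {"price", "quality", "merv", "allergens", "brand"}

def process_query(query: str) -> dict:
    """Classify the decision strategy and pull out known decision parameters."""
    strategy = strategy_clf.predict([query])[0]
    attributes = sorted({tok.strip(".,?") for tok in query.lower().split()
                         if tok.strip(".,?") in ATTRIBUTE_LEXICON})
    return {"strategy": strategy, "attributes": attributes}

print(process_query("what is the price of a MERV 13 filter from this brand?"))
```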
The dynamic, infrastructure-less nature of mobile ad hoc networks (MANETs) exposes routing in such networks to a variety of attacks and, moreover, makes conventional fixed-policy routing algorithms inefficient. To deal with the routing challenges and the varying behavior of malicious nodes in such networks, employing reinforcement learning algorithms and proper trust models seems promising. In this paper, we introduce a cognition layer that operates in parallel with, and interacts with, the network layer and comprises two cognitive processes: path learning (routing) and trust learning. The first process is based on machine learning algorithms and the latter on trust management. We compare our algorithm, TQOR, with a well-known trust-based routing protocol, TQR, in terms of three measures of performance. The simulation results show better end-to-end delay and communication overhead, which further improve as time progresses, without sacrificing the data packet delivery ratio.
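A rough sketch of the general idea behind such a cognition layer (not the TQOR implementation itself) is shown below: a Q-routing-style value update in which the reward for forwarding through a neighbor is weighted by a learned trust score. The parameter names and update rules are illustrative assumptions:

```python
# Hedged sketch of trust-weighted Q-routing; not TQOR's actual update rules.
from collections import defaultdict

class TrustAwareQRouter:
    def __init__(self, alpha=0.5, gamma=0.9, trust_rate=0.1):
        self.q = defaultdict(float)             # Q[(dest, neighbor)] = route-quality estimate
        self.trust = defaultdict(lambda: 0.5)   # trust[neighbor] in [0, 1]
        self.alpha, self.gamma, self.trust_rate = alpha, gamma, trust_rate

    def update_trust(self, neighbor, forwarded_ok: bool):
        # Move trust toward 1 when the neighbor forwards correctly, toward 0 otherwise.
        target = 1.0 if forwarded_ok else 0.0
        self.trust[neighbor] += self.trust_rate * (target - self.trust[neighbor])

    def update_q(self, dest, neighbor, delay, neighbor_best_q):
        # Reward favors low delay and trustworthy neighbors.
        reward = self.trust[neighbor] / (1.0 + delay)
        td_target = reward + self.gamma * neighbor_best_q
        self.q[(dest, neighbor)] += self.alpha * (td_target - self.q[(dest, neighbor)])

    def next_hop(self, dest, neighbors):
        # Pick the neighbor with the best trust-weighted value estimate.
        return max(neighbors, key=lambda n: self.q[(dest, n)] * self.trust[n])
```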
Advances in Intelligent Systems and Computing, 2018
In 1955, Solomon Asch investigated the relationship between opinions and social pressure. His results showed that opinions can be influenced by social pressure. In security, users are often referred to as the biggest security threat to a system. A system administrator's security efforts can be, and have been, weakened by a user's poor security decisions. Exploring conformity in security can help to discover whether conformity exists in this setting and help to remedy the issue by determining why. This study repeats Asch's experiment to discover whether opinions can be influenced by social pressure in security-related circumstances. This experiment differs from Asch's in that it takes place in a virtual room, the confederates are virtual beings, and the questions are related to passwords. This document presents the results of our experiment, including the percentage of participants who conformed and the results of the interview conducted after each session.
2020 IEEE International Symposium on Technology and Society (ISTAS)
Rural communities are more dependent on transportation than their urban peers. Transportation within rural areas is necessary to access food, healthcare, educational opportunities, and employment, especially since rural residents have longer distances to travel to access them. Therefore, the availability of efficient and affordable transportation can lead to economic growth in rural areas and ensure that people can obtain the services they need. Autonomous Vehicles (AVs) can improve accessibility and mobility in these communities. However, research that discusses autonomous vehicles' impact mostly focuses on urban transportation, with limited attention to rural areas. In this paper, the barriers to adopting autonomous vehicles in rural areas are discussed by examining the current struggles of rural communities concerning finance, transportation infrastructure, policy, and demographics. First, we suggest that companies that design and create autonomous vehicles investigate efficient ways to ensure they are affordable to rural communities. Additionally, when AVs are used for ridesharing, it must be ensured that rural communities have the financial means to use them. Second, companies must be attentive to these vehicles' accessibility to ensure that all individuals in rural communities, including disabled and older adults, can use them without difficulty. To maximize these vehicles' accessibility, companies need to involve the community in the design and policy-making processes. Lastly, any technologies used in these vehicles, such as facial recognition systems, need to address the potential for bias against minorities in rural communities to avoid future disparate impacts and injustice toward particular groups in society.
Proceedings of the Human Factors and Ergonomics Society Annual Meeting
When making routine and critical purchase decisions, consumers often need to process a surplus of information to make the right choice. Today's technology must be able to assist them in this process. Although conversational voice user interfaces have the potential to help consumers in their decision-making, extensive testing is required to ensure that they are up to par with the expectations and needs of users and contexts. Therefore, we focus on evaluating the ability of a multi-strategy conversational mobile decision-aid (MODA) (Alikhademi et al., in press) to correctly classify the decision-making strategies used by consumers and to recognize attributes, brands, and criteria voiced in an air filter purchase context. Our system evaluation results revealed that MODA performed with high levels of accuracy in classifying the user's decision-making strategy (over 80%) and recognizing decision parameters (over 75%). The main contribution of MODA is that it can support us...
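For readers unfamiliar with this style of evaluation, the sketch below shows how strategy-classification accuracy and decision-parameter recognition might be scored; all labels and predictions here are made up and are not the study's data:

```python
# Hedged sketch of the evaluation style described in the abstract; toy data only.
from sklearn.metrics import accuracy_score

true_strategies = ["lexicographic", "weighted_additive", "elimination_by_aspects", "lexicographic"]
pred_strategies = ["lexicographic", "weighted_additive", "lexicographic", "lexicographic"]
print("Strategy accuracy:", accuracy_score(true_strategies, pred_strategies))  # 0.75

def parameter_recall(true_params, pred_params):
    """Fraction of voiced attributes/brands/criteria the system recognized."""
    hits = sum(len(set(t) & set(p)) for t, p in zip(true_params, pred_params))
    total = sum(len(t) for t in true_params)
    return hits / total if total else 0.0

true_params = [{"price", "MERV"}, {"brand"}]
pred_params = [{"price"}, {"brand"}]
print("Parameter recall:", parameter_recall(true_params, pred_params))  # ~0.67
```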
Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Over the past 20 years, researchers have investigated the potential of Virtual Reality (VR) to enhance rehabilitative therapies by improving motor control, supporting motivation, and offering analgesic effects. Prior work indicates that patient adherence to prescribed in-home regimens has a significant impact on recovery time. Though Connected Health Technologies and Virtual and Augmented Reality (AR/VR) may maximize in-home adherence and recovery, questions about design and deployment remain. We designed a first-person Augmented Reality (AR) experience to elicit user and practitioner perspectives on AR for rehabilitative contexts. We found significant differences between patient- and practitioner-reported regimen adherence. We also identified key attitudinal barriers to adopting VR/AR for clinical practice, which may impact support for in-home VR/AR use. Findings from these studies inform directions for future research and development on the use of VR/AR in a therapeutic context.