Presented at the 18th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2016).
The recent miniaturization of cameras has enabled finger-based reading approaches that provide blind and visually impaired readers with access to printed materials. Compared to handheld text scanners such as mobile phone applications, mounting a tiny camera on the user’s own finger has the potential to mitigate camera framing issues, enable a blind reader to better understand the spatial layout of a document, and provide better control over reading pace. A finger-based approach, however, also introduces the need to guide the reader in physically navigating a document, such as tracing along lines of text. While previous work has proposed audio and haptic directional finger guidance for this purpose, user studies of finger-based reading have not provided an in-depth performance analysis of the finger-based reading process. To further investigate the effectiveness of finger-based sensing and feedback for reading printed text, we conducted a controlled lab experiment with 19 blind participants, comparing audio and haptic directional finger guidance within an iPad-based testbed. As a small follow-up, we asked four of those participants to return and provide feedback on a preliminary wearable prototype called HandSight. Findings from the controlled experiment show similar performance between haptic and audio directional guidance, although audio may offer an accuracy advantage for tracing lines of text. Subjective feedback also highlights tradeoffs between the two types of guidance, such as the interference of audio guidance with speech output and the potential for desensitization to haptic guidance. While several participants appreciated the direct access to layout information provided by finger-based exploration, important concerns also arose about ease of use and the amount of concentration required. We close with a discussion on the effectiveness of finger-based reading for blind users and potential design improvements to the HandSight prototype.
Complementary interstellar detections from the heliotail (Sérgio Sacani)
The heliosphere is a protective shield around the solar system created by the Sun’s interaction with the local interstellar medium (LISM) through the solar wind, transients, and interplanetary magnetic field. The shape of the heliosphere is directly linked with interactions with the surrounding LISM, in turn affecting the space environment within the heliosphere. Understanding the shape of the heliosphere, the LISM properties, and their interactions is critical for understanding the impacts within the solar system and for understanding other astrospheres. Understanding the shape of the heliosphere requires an understanding of the heliotail, as the shape is highly dependent upon the heliotail and its LISM interactions. The heliotail additionally presents an opportunity for more direct in situ measurement of interstellar particles from within the heliosphere, given the likelihood of magnetic reconnection and turbulent mixing between the LISM and the heliotail. Measurements in the heliotail should be made of pickup ions, energetic neutral atoms, low energy neutrals, and cosmic rays, as well as interstellar ions that may be injected into the heliosphere through processes such as magnetic reconnection, which can create a direct magnetic link from the LISM into the heliosphere. The Interstellar Probe mission is an ideal opportunity for measurement either along a trajectory passing through the heliotail, via the flank, or by use of a pair of spacecraft that explore the heliosphere both tailward and noseward to yield a more complete picture of the shape of the heliosphere and to help us better understand its interactions with the LISM.
Synopsis: Analysis of a Metallic Specimen (Sérgio Sacani)
The All-Domain Anomaly Resolution Office (AARO) sponsored a series of measurements on a layered material
specimen primarily composed of magnesium and zinc, with bands of bismuth and other co-located trace elements.
The material specimen, whose origin and purpose are of long and debated history, is claimed to be recovered
from an unidentified anomalous phenomenon (UAP) crash in or around 1947. Furthermore, the specimen’s
physicochemical properties are claimed to make the material capable of “inertial mass reduction” (i.e., levitation or
antigravity functionality), possibly attributable to the material’s bismuth and magnesium layers acting as a terahertz
waveguide.
Possible Anthropogenic Contributions to the LAMP-observed Surficial Icy Regol... (Sérgio Sacani)
This work assesses the potential of midsized and large human landing systems to deliver water from their exhaust
plumes to cold traps within lunar polar craters. It has been estimated that a total of between 2 and 60 T of surficial
water was sensed by the Lunar Reconnaissance Orbiter Lyman Alpha Mapping Project on the floors of the larger
permanently shadowed south polar craters. This intrinsic surficial water sensed in the far-ultraviolet is thought to be
in the form of a 0.3%–2% icy regolith in the top few hundred nanometers of the surface. We find that the six past
Apollo Lunar Module midlatitude landings could contribute no more than 0.36 T of water mass to this existing,
intrinsic surficial water in permanently shadowed regions (PSRs). However, we find that the Starship landing
plume has the potential, in some cases, to deliver over 10 T of water to the PSRs, which is a substantial fraction
(possibly >20%) of the existing intrinsic surficial water mass. This anthropogenic contribution could possibly
overlay and mix with the naturally occurring icy regolith at the uppermost surface. A possible consequence is that
the origin of the intrinsic surficial icy regolith, which is still undetermined, could be lost as it mixes with the
extrinsic anthropogenic contribution. We suggest that existing and future orbital and landed assets be used to
examine the effect of polar landers on the cold traps within PSRs.
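The “>20%” figure above follows from simple ratios of the quoted inventories. A minimal sketch, assuming only the numbers given in this summary (2–60 T intrinsic surficial water, ~10 T delivered by a Starship plume) rather than the paper’s detailed plume model, brackets the possible anthropogenic fraction:

```python
# Bound the anthropogenic water fraction using only the numbers quoted above.
intrinsic_low_t = 2.0    # tonnes, lower LAMP-based estimate of intrinsic water
intrinsic_high_t = 60.0  # tonnes, upper LAMP-based estimate
delivered_t = 10.0       # tonnes, example Starship plume delivery to the PSRs

frac_vs_high = delivered_t / intrinsic_high_t  # fraction vs. largest estimate (~0.17)
frac_vs_low = delivered_t / intrinsic_low_t    # fraction vs. smallest estimate (5x)

print(f"{frac_vs_high:.0%} to {frac_vs_low:.0%} of the intrinsic inventory")
```

Even against the largest intrinsic estimate the delivered mass is close to 20%, and against the smallest it would dominate, which is consistent with the mixing concern raised above.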
In recent years, the growth of scientific data and the increasing need for data sharing and collaboration in the field of environmental chemistry have led to the creation of various software and databases that facilitate research and development into the safety and toxicity of chemicals. The US-EPA Center for Computational Toxicology and Exposure has been developing software and databases that serve the chemistry community for many years. This presentation will focus on several web-based software applications which have been developed at the US-EPA and made available to the community. While the primary software application from the Center is the CompTox Chemicals Dashboard, which provides access to data for >1.2 million chemicals (https://comptox.epa.gov/dashboard), almost a dozen proof-of-concept applications have been built to serve various capabilities. The publicly accessible proof-of-concept Cheminformatics Modules (https://www.epa.gov/chemicalresearch/cheminformatics) provide access to multiple applications in development, allowing for hazard comparison for sets of chemicals, structure-substructure-similarity searching, structure alerts, and batch QSAR prediction of both physicochemical and toxicity endpoints. A number of other applications, presently in development but not publicly accessible, will also be discussed. These include AMOS, the database of Analytical Methods and Open Spectra.
Analytical methods vary in nature from detailed regulatory methods to brief summary methods. Regulatory method documents can include details of the analytes which can be studied, supported matrices, reagents, methodological details, statistical performance, interlaboratory validation, and other details. Summary methods provide a general overview of reagents, instrumentation, and commonly a short list of analytes. Regulatory bodies including the US Environmental Protection Agency (US-EPA), US Geological Survey (USGS), US Department of Agriculture (USDA), and others provide detailed analytical methods and collections of summary methods from the agrochemical industry, such as the US-EPA Environmental Chemistry Methods (https://www.epa.gov/pesticide-analytical-methods/environmental-chemistry-methods-ecm). Instrument vendors also provide access to many hundreds of application notes which can be considered summary methods. AMOS presently contains >4,500 methods linked to their chemical structures and >230,000 public-domain mass spectra. AMOS allows for filtering of methods based on analyte, chemical class, method source, and other related metadata. AMOS is an important facet of the Non-Targeted Analysis WebApp presently in development at the EPA.
This presentation will provide an overview of existing publicly accessible Dashboards and work in progress to support analysis of pesticides, veterinary drug residues, and other chemicals in food, animal feed, and environmental samples.
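One of the capabilities mentioned above, structure-similarity searching, is conventionally built on fingerprint comparison. As a minimal sketch (not the EPA implementation; the fingerprints are hypothetical bit sets, not output of any EPA tool), the Tanimoto coefficient scores how many substructure bits two molecules share:

```python
# Tanimoto similarity between binary fingerprints represented as sets of
# "on" bit positions: |intersection| / |union|.
def tanimoto(fp_a: set[int], fp_b: set[int]) -> float:
    if not fp_a and not fp_b:
        return 0.0  # no bits set in either fingerprint
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Two made-up fingerprints sharing 2 of 4 distinct bits -> similarity 0.5
query = {1, 2, 3}
candidate = {2, 3, 4}
print(tanimoto(query, candidate))  # 0.5
```

A real search ranks database candidates by this score against a query fingerprint; production tools typically use fixed-length bit vectors (e.g., hashed path or circular fingerprints) rather than Python sets.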
Testing the Son of God Hypothesis (Jesus Christ) (Robert Luk)
Instead of addressing the God hypothesis, we investigate the Son of God hypothesis. Unlike science, which tests universal statements, we developed our own methodology to deal with existential statements. We discuss the existence of the supernatural and find that there is strong evidence for it. Given that the supernatural exists, we report on miracles investigated in the past related to the Son of God. A Bayesian methodology is used to calculate the combined degree of belief in the Son of God hypothesis. We also report testing of occurrences of words/numbers in the Bible to suggest the likelihood of some special numbers occurring, supporting the Son of God hypothesis. We also include a table showing the past occurrences of miracles in hundred-year periods spanning about 1000 years. Miracles that we have looked at include the Shroud of Turin, Eucharistic miracles, Marian apparitions, incorruptible corpses, etc.
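The Bayesian combination described can be sketched as multiplying a prior odds by one Bayes factor (likelihood ratio) per line of evidence. All numbers below are hypothetical placeholders, not values from the paper, and the independence of the evidence lines is an assumption:

```python
# Combine a prior probability with independent Bayes factors, then convert
# the resulting odds back to a posterior probability.
def posterior_probability(prior_prob: float, bayes_factors: list[float]) -> float:
    odds = prior_prob / (1.0 - prior_prob)  # prior odds
    for bf in bayes_factors:
        odds *= bf  # assumes the lines of evidence are independent
    return odds / (1.0 + odds)

# Hypothetical example: indifferent prior (0.5) and two Bayes factors of 2 and 3
p = posterior_probability(0.5, [2.0, 3.0])  # prior odds 1, combined BF = 6
print(p)  # 6/7 ≈ 0.857
```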
PART 1 & PART 2 The New Natural Principles of Newtonian Mechanics, Electromec... (Thane Heins)
PART 1
The New Natural Principles of Newtonian Mechanics, Electromechanics, Electrodynamics, Electromagnetism and Electromagnetic Field Energy
PART 2
How Electromagnetic Field Energy is Created and Destroyed (absorbed) in a Current-Carrying Conductor
A Strong He II λ1640 Emitter with an Extremely Blue UV Spectral Slope at z=8.... (Sérgio Sacani)
Cosmic hydrogen reionization and cosmic production of the first metals are major phase transitions of the Universe
occurring during the first billion years after the Big Bang; however, these are still underexplored observationally.
Using the JWST/NIRSpec prism spectroscopy, we report the discovery of a sub-L* galaxy at zspec =
8.1623 ± 0.0007, dubbed RX J2129–z8He II, via the detection of a series of strong rest-frame UV/optical nebular
emission lines and the clear Lyman break. RX J2129–z8He II shows a pronounced UV continuum with an
extremely steep (i.e., blue) spectral slope of β = −2.53 (+0.06, −0.07), the steepest among all spectroscopically confirmed
galaxies at zspec ≳ 7, in support of its very hard ionizing spectrum that could lead to a significant leakage of its
ionizing flux. Therefore, RX J2129–z8He II is representative of the key galaxy population driving the cosmic
reionization. More importantly, we detect a strong He II λ1640 emission line in its spectrum, one of the highest
redshifts at which such a line is robustly detected. Its high rest-frame equivalent width (EW = 21 ± 4 Å) and
extreme flux ratios with respect to UV metal and Balmer lines raise the possibility that part of RX J2129–z8He II’s
stellar population could be Population III (Pop III)-like. Through careful photoionization modeling, we show that the
physically calibrated phenomenological models of the ionizing spectra of Pop III stars with strong mass loss can
successfully reproduce the emission line flux ratios observed in RX J2129–z8He II. Assuming the Eddington limit,
the total mass of the Pop III stars within this system is estimated to be (7.8 ± 1.4) × 10^5 M⊙. To date, this galaxy
presents the most compelling case in the early Universe where trace Pop III stars might coexist with metal-enriched
populations.
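The Eddington-limit step in the mass estimate above can be made explicit. Assuming the standard Eddington luminosity relation (a textbook formula, not quoted from this summary), the stellar mass follows from the luminosity of stars radiating at the limit:

```latex
L_{\rm Edd} \;=\; \frac{4\pi G M m_p c}{\sigma_T}
\;\simeq\; 1.26\times10^{38}\,\left(\frac{M}{M_\odot}\right)\ {\rm erg\,s^{-1}},
\qquad
M \;\simeq\; \frac{L}{1.26\times10^{38}\,{\rm erg\,s^{-1}}}\,M_\odot .
```

On these assumptions, the quoted (7.8 ± 1.4) × 10^5 M⊙ would correspond to a total luminosity of roughly 10^44 erg s^-1 at the limit.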
All-domain Anomaly Resolution Office Supplement to Oak Ridge National Laborat... (Sérgio Sacani)
In 2022, the All-domain Anomaly Resolution Office (AARO) contracted with Oak Ridge
National Laboratory (ORNL) to conduct materials testing on a magnesium (Mg) alloy specimen.
This specimen has been publicly alleged to be a component recovered from a crashed
extraterrestrial vehicle in 1947, and purportedly exhibits extraordinary properties, such as
functioning as a terahertz waveguide to generate antigravity capabilities. In April 2024, ORNL
produced a summary of findings documenting the laboratory’s methodology to assess this
specimen’s elemental and structural characteristics, available on AARO’s website.
ORNL assessed this specimen to be terrestrial in origin and that it does not meet the theoretical
requirements to function as a terahertz (THz) waveguide. AARO concurs with ORNL’s
assessment and provides this supplementary material to add historical context to account for its
likely origin. The specimen’s characteristics are consistent with Mg alloy research and
development projects and experimental manufacturing methods in the mid-20th century.
A mature quasar at cosmic dawn revealed by JWST rest-frame infrared spectroscopy (Sérgio Sacani)
The rapid assembly of the first supermassive black holes is an enduring mystery. Until now, it was not known whether quasar ‘feeding’ structures (the ‘hot torus’) could assemble as fast as the smaller-scale quasar structures. We present JWST/MRS (rest-frame infrared) spectroscopic observations of the quasar J1120+0641 at z = 7.0848 (well within the epoch of reionization). The hot torus dust was clearly detected at λrest ≃ 1.3 μm, with a black-body temperature slightly elevated compared to similarly luminous quasars at lower redshifts. Importantly, the supermassive black hole mass of J1120+0641 based on the Hα line (accessible only with JWST), MBH = (1.52 ± 0.17) × 10^9 M⊙, is in good agreement with previous ground-based rest-frame ultraviolet Mg II measurements. Comparing the ratios of the Hα, Paα and Paβ emission lines to predictions from a simple one-phase Cloudy model, we find that they are consistent with originating from a common broad-line region with physical parameters that are consistent with lower-redshift quasars. Together, this implies that J1120+0641’s accretion structures must have assembled very quickly, as they appear fully ‘mature’ less than 760 Myr after the Big Bang.
A NICER VIEW OF THE NEAREST AND BRIGHTEST MILLISECOND PULSAR: PSR J0437−4715 (Sérgio Sacani)
We report Bayesian inference of the mass, radius and hot X-ray emitting region properties - using data
from the Neutron Star Interior Composition ExploreR (NICER) - for the brightest rotation-powered
millisecond X-ray pulsar PSR J0437−4715. Our modeling is conditional on informative tight priors
on mass, distance and binary inclination obtained from radio pulsar timing using the Parkes Pulsar
Timing Array (PPTA) (Reardon et al. 2024), and we use NICER background models to constrain
the non-source background, cross-checking with data from XMM-Newton. We assume two distinct
hot emitting regions, and various parameterized hot region geometries that are defined in terms of
overlapping circles; while simplified, these capture many of the possibilities suggested by detailed
modeling of return current heating. For the preferred model identified by our analysis we infer a mass
of M = 1.418 ± 0.037 M⊙ (largely informed by the PPTA mass prior) and an equatorial radius of
R = 11.36 (+0.95, −0.63) km, each reported as the posterior credible interval bounded by the 16% and 84%
quantiles. This radius favors softer dense matter equations of state and is highly consistent with
constraints derived from gravitational wave measurements of neutron star binary mergers. The hot
regions are inferred to be non-antipodal, and hence inconsistent with a pure centered dipole magnetic
field.
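The intervals quoted above follow the stated convention: the 16% and 84% posterior quantiles bracket the central 68% credible region around the median. A minimal sketch with a synthetic stand-in posterior (not the actual NICER posterior chain):

```python
# Report a median with asymmetric 68% credible bounds from posterior samples.
import random

random.seed(0)
# Fake radius posterior: 100k Gaussian draws standing in for real MCMC samples (km)
samples = sorted(random.gauss(11.36, 0.8) for _ in range(100_000))

def quantile(sorted_xs, q):
    # Simple nearest-rank quantile; adequate for a large sorted sample.
    idx = min(int(q * len(sorted_xs)), len(sorted_xs) - 1)
    return sorted_xs[idx]

median = quantile(samples, 0.50)
lo, hi = quantile(samples, 0.16), quantile(samples, 0.84)
print(f"R = {median:.2f} +{hi - median:.2f} / -{median - lo:.2f} km")
```

For a Gaussian posterior the bounds come out symmetric; the asymmetric published interval (+0.95, −0.63) reflects a skewed posterior, which this reporting convention captures naturally.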
Lunar Mobility Drivers and Needs - Artemis (Sérgio Sacani)
NASA’s new campaign of lunar exploration will see astronauts visiting sites of scientific or strategic
interest across the lunar surface, with a particular focus on the lunar South Pole region.[1] After landing
crew and cargo at these destinations, local mobility around landing sites will be key to movement of
cargo, logistics, science payloads, and more to maximize exploration returns.
NASA’s Moon to Mars Architecture Definition Document (ADD)[2] articulates the work needed to achieve
the agency’s human lunar exploration objectives by decomposing needs into use cases and functions.
Ongoing analysis of lunar exploration needs reveals demands that will drive future concepts and elements.
Recent analysis of integrated surface operations has shown that the transportation of cargo on the
surface from points of delivery to points of use will be particularly important. Exploration systems will
often need to support deployment of cargo in close proximity to other surface infrastructure. This cargo
can range from the crew logistics and consumables described in the 2023 “Lunar Logistics Drivers and
Needs” white paper,[3] to science and technology demonstrations, to large-scale infrastructure that
requires precision relocation.
Modelling, Simulation, and Computer-aided Design in Computational, Evolutiona... (University of Maribor)
Slides from:
Aleš Zamuda:
Modelling, Simulation, and Computer-aided Design in Computational, Evolutionary, Supercomputing, and Intelligent Systems.
Central European Exchange Program for University Studies (CEEPUS). TU Graz, Austria
OeAD Austria, CEEPUS network "Modelling, Simulation and Computer-aided Design in Engineering and Management"
Deploying DAPHNE Computational Intelligence on EuroHPC Vega for Benchmarking ... (University of Maribor)
Slides from talk:
Aleš Zamuda, Mark Dokter:
Deploying DAPHNE Computational Intelligence on EuroHPC Vega for Benchmarking Randomised Optimisation Algorithms.
2024 International Conference on Broadband Communications for Next Generation Networks and Multimedia Applications (CoBCom), 9-11 July 2024, Graz, Austria
https://www.cobcom.tugraz.at/
Phytoremediation: Harnessing Nature's Power with Phytoremediation (Gurjant Singh)
This document provides an overview of phytoremediation, which uses plants to remove contaminants from soil, sediment, or water. It discusses the need for new remediation techniques, describes various phytoremediation processes like phytoextraction and rhizofiltration, and covers important concepts like hyperaccumulators, biotechnology applications, case studies, and advantages/limitations. The author aims to explain the mechanisms, history, types of plants used, and future research directions of this eco-friendly approach to environmental cleanup.
Probing the northern Kaapvaal craton root with mantle-derived xenocrysts from... (James AH Campbell)
"Probing the northern Kaapvaal craton root with mantle-derived xenocrysts from the Marsfontein orangeite diatreme, South Africa".
N.S. Ngwenya, S. Tappe, K.A. Smart, D.C. Hezel, J.A.H. Campbell, K.S. Viljoen
2024 Trend Updates: What Really Works In SEO & Content Marketing (Search Engine Journal)
The future of SEO is trending toward a more human-first and user-centric approach, powered by AI intelligence and collaboration. Are you ready?
Watch as we explore which SEO trends to prioritize to achieve sustainable growth and deliver reliable results. We’ll dive into best practices to adapt your strategy around industry-wide disruptions like SGE, how to navigate the top challenges SEO professionals are facing, and proven tactics for prioritizing quality and building trust.
You’ll hear:
- The top SEO trends to prioritize in 2024 to achieve long-term success.
- Predictions for SGE’s impact, and how to adapt.
- What E-E-A-T really means, and how to implement it holistically (hint: it’s never been more important).
With Zack Kadish and Alex Carchietta, we’ll show you which SEO trends to ignore and which to focus on, along with the solution to overcoming rapid, significant and disruptive Google algorithm updates.
If you’re looking to cut through the noise of constant SEO and content trends to drive success, you won’t want to miss this webinar.
Storytelling For The Web: Integrate Storytelling in your Design Process (Chiara Aliotta)
In these slides I explain how I have used storytelling techniques to elevate websites and brands and create memorable user experiences. You can discover practical tips as I showcase the elements of good storytelling and how they apply to examples of diverse brands and projects.
This presentation by Thibault Schrepel, Associate Professor of Law at Vrije Universiteit Amsterdam, was made during the discussion “Artificial Intelligence, Data and Competition” held at the 143rd meeting of the OECD Competition Committee on 12 June 2024. More papers and presentations on the topic can be found at oe.cd/aicomp.
This presentation was uploaded with the author’s consent.
How to Leverage AI to Boost Employee Wellness - Lydia Di Francesco - SocialHR... (SocialHRCamp)
Speaker: Lydia Di Francesco
In this workshop, participants will delve into the realm of AI and its profound potential to revolutionize employee wellness initiatives. From stress management to fostering work-life harmony, AI offers a myriad of innovative tools and strategies that can significantly enhance the wellbeing of employees in any organization. Attendees will learn how to effectively leverage AI technologies to cultivate a healthier, happier, and more productive workforce. Whether it's utilizing AI-powered chatbots for mental health support, implementing data analytics to identify internal, systemic risk factors, or deploying personalized wellness apps, this workshop will equip participants with actionable insights and best practices to harness the power of AI for boosting employee wellness. Join us and discover how AI can be a strategic partner towards a culture of wellbeing and resilience in the workplace.
2024 State of Marketing Report – by Hubspot (Marius Sescu)
https://www.hubspot.com/state-of-marketing
· Scaling relationships and proving ROI
· Social media is the place for search, sales, and service
· Authentic influencer partnerships fuel brand growth
· The strongest connections happen via call, click, chat, and camera.
· Time saved with AI leads to more creative work
· Seeking: A single source of truth
· TLDR; Get on social, try AI, and align your systems.
· More human marketing, powered by robots
ChatGPT has been a revolutionary addition to the world since its introduction in 2022. This chatbot brought about a big shift in the sector of information gathering and processing. What is the story of ChatGPT? How does the bot respond to prompts and generate content? Swipe through these slides prepared by Expeed Software, a web development company, regarding the development and technical intricacies of ChatGPT!
Product Design Trends in 2024 | Teenage Engineerings (Pixeldarts)
The realm of product design is a constantly changing environment where technology and style intersect. Every year introduces fresh challenges and exciting trends that mold the future of this captivating art form. In this piece, we delve into the significant trends set to influence the look and functionality of product design in the year 2024.
How Race, Age and Gender Shape Attitudes Towards Mental Health (ThinkNow)
Mental health has been in the news quite a bit lately. Dozens of U.S. states are currently suing Meta for contributing to the youth mental health crisis by inserting addictive features into their products, while the U.S. Surgeon General is touring the nation to bring awareness to the growing epidemic of loneliness and isolation. The country has endured periods of low national morale, such as in the 1970s when high inflation and the energy crisis worsened public sentiment following the Vietnam War. The current mood, however, feels different. Gallup recently reported that national mental health is at an all-time low, with few bright spots to lift spirits.
To better understand how Americans are feeling and their attitudes towards mental health in general, ThinkNow conducted a nationally representative quantitative survey of 1,500 respondents and found some interesting differences among ethnic, age and gender groups.
Technology
For example, 52% agree that technology and social media have a negative impact on mental health, but when broken out by race, 61% of Whites felt technology had a negative effect, and only 48% of Hispanics thought it did.
While technology has helped us keep in touch with friends and family in faraway places, it appears to have degraded our ability to connect in person. Staying connected online is a double-edged sword since the same news feed that brings us pictures of the grandkids and fluffy kittens also feeds us news about the wars in Israel and Ukraine, the dysfunction in Washington, the latest mass shooting and the climate crisis.
Hispanics may have a built-in defense against the isolation technology breeds, owing to their large, multigenerational households, strong social support systems, and tendency to use social media to stay connected with relatives abroad.
Age and Gender
When asked to rate their mental health, men rate it higher than women by 11 percentage points, and Baby Boomers rank it highest, with 83% saying it’s good or excellent vs. 57% of Gen Z saying the same.
Gen Z spends the most time on social media, which is consistent with the notion that heavy social media use and poorer mental health are correlated. Unfortunately, Gen Z is also the generation that’s least comfortable discussing mental health concerns with healthcare professionals: only 40% state they’re comfortable discussing their issues with a professional, compared to 60% of Millennials and 65% of Boomers.
Race Affects Attitudes
As seen in previous research conducted by ThinkNow, Asian Americans lag other groups when it comes to awareness of mental health issues. Twenty-four percent of Asian Americans believe that having a mental health issue is a sign of weakness compared to the 16% average for all groups. Asians are also considerably less likely to be aware of mental health services in their communities (42% vs. 55%) and most likely to seek out information on social media (51% vs. 35%).
AI Trends in Creative Operations 2024 by Artwork Flow.pdf (marketingartwork)
Creative operations teams expect increased AI use in 2024. Currently, over half of tasks are not AI-enabled, but this is expected to decrease in the coming year. ChatGPT is the most popular AI tool currently. Business leaders are more actively exploring AI benefits than individual contributors. Most respondents do not believe AI will impact workforce size in 2024. However, some inhibitions still exist around AI accuracy and lack of understanding. Creatives primarily want to use AI to save time on mundane tasks and boost productivity.
Organizational culture includes values, norms, systems, symbols, language, assumptions, beliefs, and habits that influence employee behaviors and how people interpret those behaviors. It is important because culture can help or hinder a company's success. Some key aspects of Netflix's culture that help it achieve results include hiring smartly so every position has stars, focusing on attitude over just aptitude, and having a strict policy against peacocks, whiners, and jerks.
PEPSICO Presentation to CAGNY Conference Feb 2024 (Neil Kimberley)
PepsiCo provided a safe harbor statement noting that any forward-looking statements are based on currently available information and are subject to risks and uncertainties. It also provided information on non-GAAP measures and directed readers to its website for disclosure and reconciliation. The document then discussed PepsiCo's business overview, including that it is a global beverage and convenient food company with iconic brands, $91 billion in net revenue in 2023, and nearly $14 billion in core operating profit. It operates through a divisional structure with a focus on local consumers.
Content Methodology: A Best Practices Report (Webinar) (contently)
This document provides an overview of content methodology best practices. It defines content methodology as establishing objectives, KPIs, and a culture of continuous learning and iteration. An effective methodology focuses on connecting with audiences, creating optimal content, and optimizing processes. It also discusses why a methodology is needed due to the competitive landscape, proliferation of channels, and opportunities for improvement. Components of an effective methodology include defining objectives and KPIs, audience analysis, identifying opportunities, and evaluating resources. The document concludes with recommendations around creating a content plan, testing and optimizing content over 90 days.
How to Prepare For a Successful Job Search for 2024 (Albert Qian)
Evaluating Haptic and Auditory Guidance to Assist Blind People in Reading Printed Text Using Finger-Mounted Cameras
1. Lee Stearns¹, Ruofei Du¹, Uran Oh¹, Catherine Jou¹, Leah Findlater², David A. Ross³, Jon E. Froehlich¹
¹University of Maryland, Computer Science; ²University of Maryland, Information Studies; ³Atlanta VA R&D Center for Visual & Neurocognitive Rehabilitation
Evaluating Haptic and Auditory Guidance to
Assist Blind People in Reading Printed Text
Using Finger-Mounted Cameras
TACCESS | ASSETS 2016
2. What if printed text could be accessed
through touch in the same way as braille?
*Video Credit: YouTube—Ginny Owens—How I See It (Reading Braille)
Reading printed materials is still an important but challenging task for people with visual impairments.
11. Open Questions (Existing Devices)
1. How to assist with aiming the camera to capture desired content?
2. How to handle complex documents and convey layout information?
16. HANDSIGHT: A vision-augmented touch system*
Tiny CMOS cameras and haptic actuators mounted on one or more fingers
Smartwatch for power, processing, speech, and audio output
* Originally proposed in Stearns et al. 2014
23. Advantages of Finger-Based Reading
1. Does not require framing an overhead camera
2. Allows direct access to spatial information
3. Provides better control over pace and rereading
New Challenges
1. How to precisely trace a line of text?
2. How to support physical navigation?
27. COMPARING TWO TYPES OF DIRECTIONAL FINGER GUIDANCE
1. Finger-mounted vibration motors
2. Audio via built-in or external speakers
Higher pitch: move up
Lower pitch: move down
34. Research Questions
1. To what extent are finger-based cameras a viable accessibility solution for reading printed text?
2. What design choices can improve this viability?
35. Study Overview
Study I: initial iPad study (19 participants)
Study II: physical prototype study (4 participants)
Study I goals: compare audio/haptic guidance; explore & interpret spatial layouts; assess reading and comprehension
38. Study I Method
Used an iPad to focus on the user experience and gather finger trace data
19 participants
Median Age: 48 (SD=12, Range=26-67)
Gender: 11 male, 8 female
Vision Level: 10 totally blind, 9 with minimal light perception
Within-subjects design, two guidance conditions: audio (pitch) and haptic (vibrations)
Participants read two documents per condition: one plain, one magazine-style
Analysis: reading speed and accuracy, comprehension, subjective feedback
44. System Design: Exploration Mode
Continuous audio feedback identifies the content beneath the finger (same across both conditions):
Flute sound: text
Cello sound: picture
Silence: empty space
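The exploration-mode mapping is simple enough to sketch in a few lines of Python; the function and label names below are illustrative assumptions, not the actual HandSight implementation:

```python
def exploration_sound(content_under_finger):
    """Map the content beneath the finger to the continuous audio cue
    described on this slide. Label names are assumed for illustration."""
    sounds = {
        "text": "flute",     # distinctive sound indicating text
        "picture": "cello",  # distinctive sound indicating a picture
    }
    # Empty space (or anything unrecognized) maps to silence.
    return sounds.get(content_under_finger, "silence")
```

This mapping was identical in both study conditions; only the directional guidance during reading differed between audio and haptic.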
46. System Design: Reading Mode
Bimanual: right index finger to read, left finger to anchor the start of the line
Directional guidance (audio or haptic, depending on condition) used to stay on the line or find the start of the next line
Audio: pitch of a continuous tone
Haptic: strength and position of vibration
Additional audio cues (same for both conditions): start/end of line or paragraph
Synthesized speech output
47. Directional guidance mapping
Above the line: downward guidance (low pitch or lower vibration motor)
Below the line: upward guidance (high pitch or upper vibration motor)
Start/end of line or paragraph: short but distinctive audio cues
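As a rough sketch, this mapping can be expressed as a function of the finger's vertical offset from the line. The names, sign convention, and scalar "magnitude" encoding below are assumptions for illustration; the actual cue synthesis is not shown:

```python
def directional_guidance(offset, condition):
    """Return a (cue, magnitude) pair for a vertical offset from the line.

    offset > 0: finger above the line -> guide downward.
    offset < 0: finger below the line -> guide upward.
    Names and units here are illustrative, not HandSight's actual API.
    """
    if offset == 0:
        return None  # on the line: no directional guidance needed
    if condition == "audio":
        # Pitch encodes direction to move; magnitude tracks distance to the line.
        cue = "low_pitch" if offset > 0 else "high_pitch"
    else:
        # Haptic: motor position encodes direction, vibration intensity tracks distance.
        cue = "lower_motor" if offset > 0 else "upper_motor"
    return (cue, abs(offset))
```

The key design point is that both conditions encode the same two quantities (direction and distance); only the output channel differs.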
49. Study I Findings: Haptic vs. Audio Quantitative Performance
Line tracing on magazine documents: audio significantly more accurate (p = .018)
Comprehension high, with no significant differences between conditions
Example finger traces (audio vs. haptic): dashed red lines mark drift off of the line
52. Study I Findings: Haptic vs. Audio Subjective Preference
Preferences split (11 haptic, 7 audio, 1 equal preference)
Preferred haptic: more intuitive, easier to use, faster, less distracting
Preferred audio: less confusing, more comfortable, no desensitization
Reflects contradictory findings in Stearns et al. 2014 and Shilkrot et al. 2014, 2015
56. Study I Findings: Overall Reading Experience
Pros: low learning curve; flexible; direct control over speed
Cons: hard to use for reading; high cognitive load may affect comprehension
58. Study I Findings: Exploration Mode
Participants appreciated direct access to spatial information, and nearly all were able to locate images and count the number of columns.
59. Study Overview
Study I: initial iPad study (19 participants)
Study II: physical prototype study (4 participants)
65. Study II Findings: HandSight Overall Experience
Average reading speed: 45 wpm (SD=19, Range=18-60)
Rated somewhat easy to use, but slow and requiring concentration
Participant quotes:
“I’m very pleased and excited about the system. I think it could make a great difference in my life.” (P19)
“It seems like a lot of effort for reading text.” (P12)
67. Study II Findings: HandSight vs. KNFB Reader iOS
Participants unanimously preferred KNFB Reader iOS: faster, and easier to concentrate on the content of the text
74. Implications: Feasibility of a Finger-Based Reading Approach
Pros (observed in our studies):
Spatial layout information
Direct control over reading
Reduced camera framing issues
Efficient text detection and recognition
Cons:
Slower; requires increased concentration and physical dexterity (consistent with Shilkrot et al. 2014, 2015)
Importance of spatial layout information is unclear (has yet to be investigated in this context)
79. Future Work
Study the utility of spatial layout information in everyday use (e.g., newspapers, menus, maps, graphs)
82. Questions?
Contact: lstearns@umd.edu
Thank you to our participants and the Maryland State Library for the Blind and Physically Handicapped.
This research was funded by the Department of Defense.
Editor's Notes
Hello everyone! My name is Lee Stearns and I’m from the University of Maryland. Today I’m going to be presenting my group’s research, titled “Evaluating Haptic and Auditory Guidance to Assist Blind People in Reading Printed Text Using Finger-Mounted Cameras”.
We began this project with a question: what if printed text could be accessed through touch in the same way as braille? Touch-based reading is a highly tactile experience that enables easy exploration of spatial layouts and direct control over reading speed or order.
But relatively little material is available in Braille, and even with the increasing availability of digital information and screen reader software, reading printed text materials is still an important but challenging task for people with visual impairments.
We looked at several existing devices that offer reading solutions,
from stationary scanners and screen reader software [next slide]
to dedicated devices such as desktop video magnifiers, [next slide]
to mobile phone applications like KNFB Reader [next slide]
and more recently to wearable cameras such as OrCam.
These devices have gone a long way toward improving the accessibility of printed materials, but several open questions still remain.
One important question for handheld or wearable devices is how to help with aiming the camera to capture the target content. For example, the KNFB Reader app and OrCam both provide spoken field of view reports to assist with aiming, but the process can still be challenging for blind readers.
Another question is how to interpret and communicate documents with complex layouts such as newspapers or menus. Even with digital content it can be difficult to decide the order to read different blocks of text, and what layout details to convey.
As an alternative to these solutions, we envision a system called HandSight that’s made up of tiny cameras [next slide] and haptics [next slide] integrated directly with the user’s fingers. It would be connected to a smartwatch [next slide] for power, processing, and speech or audio output. [next slide]
We first proposed this idea as a system that augments the sense of touch for several different applications, but our focus so far has primarily been on reading printed text.
The idea of augmenting the user’s finger has become more and more common in recent years. There’s a survey paper called Digital Digits that gives a good overview, but today I’m only going to mention two projects that are particularly relevant to our research.
The first is Magic Finger, which combined a small camera with an optical mouse sensor at the tip of the user’s finger to recognize gestures on any surface.
And more specifically related to our goal of reading printed materials is FingerReader, which explored using a ring-based camera with haptic and audio feedback. It allowed blind readers to trace their finger over printed text and hear speech output in real-time.
Our own previous research explored a similar solution. We used a ring design with the same tiny camera as Magic Finger, along with a pair of vibration motors to provide haptic feedback.
Reading by touch has several potential advantages. It enables direct access—you place your finger on the page, and you can immediately start getting feedback and exploring the document. You don’t need to worry about lining up the document with the camera or waiting for it to be scanned and processed.
It may also provide a more intuitive way for users to explore a document’s layout, which we hope could improve comprehension and document understanding, especially for more complex documents.
And since the system reads words only when the finger moves across them, users have better control over pace and rereading.
But a finger-based approach also introduces some new challenges that haven’t been fully investigated.
But a finger-based approach also introduces some new challenges that haven’t been fully investigated.
Because the field of view from a finger-mounted camera is limited, the reader has to precisely trace their finger along the current line of text so that the image doesn’t get cut off or distorted. We’ll also need to assist users in physical navigation through the document, such as finding the start of a passage and moving from one line to the next.
To overcome these two challenges we’ll need to be able to effectively guide the user’s finger.
We compared two types of guidance to see which would better enable blind readers to trace lines of text and read a document: haptic or audio.
First was the haptic directional guidance, which used a pair of vibration motors mounted on the top and bottom of the user’s finger. These conveyed direction and distance by the position and intensity of the vibration.
The upper motor directed participants to move upwards,
and the lower motor directed them downwards.
Second was audio directional guidance, which used a continuous tone with a variable pitch to convey direction and distance.
In this case a higher pitch directed participants upward
and a lower pitch directed them downward.
Our research was the first to take a detailed look at directional guidance and the general usability of reading with a finger-mounted camera, both quantitatively and subjectively.
The main questions driving our research were: to what extent are finger-based cameras a viable accessibility solution for reading printed text, and what design choices could improve that viability?
We conducted two studies to further investigate this idea. The first was a controlled lab experiment with 19 participants, which looked at the user experience of reading through touch and specifically compared the two types of directional finger guidance. The second was a smaller follow-up study with 4 participants, which collected preliminary feedback about our wearable prototype.
The primary goal of the first study was to compare audio and haptic directional finger guidance in terms of user performance and preference. But we also explored the extent that our approach would allow a blind reader to interpret the spatial layout of a document and to read and understand that document.
We used an iPad covered with a thin sheet of paper to simulate the experience of reading a physical document. That let us bypass some of the technical challenges and instead focus just on the user experience. It also allowed us to collect precise finger trace data to perform detailed analysis of finger movements.
We recruited 19 participants, all of whom were totally blind or had only minimal light perception.
We used a within-subjects experiment design with 2 directional guidance conditions: audio and haptic. The conditions were identical aside from the type of directional guidance that was provided while the participant was tracing a line of text or searching for the start of the next line.
We asked each participant to read 2 documents for each condition, a plain text document and a magazine-style document that had two columns, a title, and an image.
We collected statistics on the participants’ reading speed and accuracy, their comprehension of each document, and subjective feedback about the ease of use. We used all of that information to compare the two guidance conditions and to evaluate the overall usability of our interface.
Our system had 2 modes: exploration mode and reading mode. For each document, participants first explored to learn its layout and find the first line of text, and then read sequentially through the selected passage.
In the exploration mode, the participant could move around the page however they liked, and the system provided continuous audio feedback to identify the content that was underneath their finger. We chose distinctive high and low pitched sounds to represent text or pictures, and silence to represent empty space. The exploration mode was the same across both conditions.
I’m now going to play a video demo of that interface.
[play video]
In the reading mode, participants moved sequentially along each line and the system read each word out loud as their finger passed over it. They could use their right index finger to read and their left as an anchor to help them find the start of the next line. We provided vertical directional guidance to help stay on a line or to locate the start of a new line.
For the audio condition, continuous audio cues played when the participant’s finger drifted off of the line, and the pitch changed depending on the direction and distance to the line. The haptic condition was very similar, but it used the location and intensity of the vibrations in place of the audio pitch. We also played distinctive audio cues to identify the start and end of each line or paragraph, in addition to the speech feedback.
Here’s a demo of the reading interface.
[play video]
Our findings highlighted tradeoffs between the two guidance conditions.
The speed and accuracy of the two types of guidance were very similar for both line finding and line tracing, but audio may offer an accuracy advantage: it was significantly more accurate than haptic guidance while reading the magazine documents.
These images show some example finger traces from a single participant. The solid green lines show when the participant was accurately following the line, and dashed red lines show where they were off the line and receiving directional guidance. Audio guidance tended to prompt a more immediate correction than haptic.
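Accuracy of this kind can be scored from logged finger traces. A hypothetical sketch of such scoring follows; the sampling format, coordinate convention, and pixel tolerance are assumptions for illustration, not the paper's actual analysis:

```python
def on_line_fraction(trace_ys, line_y, tolerance=10):
    """Fraction of trace samples within `tolerance` pixels of the line center.

    `trace_ys` is a list of sampled vertical finger positions; the names
    and the default tolerance value are illustrative assumptions.
    """
    if not trace_ys:
        return 0.0
    on_line = sum(1 for y in trace_ys if abs(y - line_y) <= tolerance)
    return on_line / len(trace_ys)
```

A higher fraction corresponds to more solid green (on-line) segments and fewer dashed red (drift) segments in traces like these.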
Comprehension was high across all of the documents, with no significant differences between the two conditions.
The participants’ preferences were split, although a majority preferred haptic guidance.
The participants who preferred haptic felt that it was more intuitive, easier to use, or faster than audio. Some also found the audio guidance more distracting, and said that it made it harder to focus on the speech feedback.
On the other hand, almost all of those who preferred audio guidance found it to be less confusing than haptic. Several participants had trouble remembering which direction was indicated by which motor. A few felt that the mapping we used was the reverse of how it should be, and it’s possible that, given the option to customize the mapping, more would have preferred the haptic guidance.
Two participants were also concerned with the comfort level of the haptic version and the potential for desensitization over time.
This split in user preferences reflects the contradictory findings reported in previous studies.
In general, several participants liked the lower learning curve and flexibility of our system compared to other reading approaches, and especially the ability to directly control the reading speed.
However, others raised concerns about ease of use and cognitive load, which could affect comprehension and retention.
Finally, a few participants mentioned that they liked the knowledge about the document’s layout that the exploration mode provided, and nearly all were able to use it to locate images and count the number of columns in a document.
We conducted a follow-up study with 4 participants from Study I who returned to test our proof-of-concept wearable prototype
The first study was primarily focused on the user experience, and the iPad let us perform a detailed analysis of the participants’ finger movements. For this second study, we implemented the interfaces from our iPad version as a fully functioning physical prototype, which we called HandSight. We asked participants to try the system and provide subjective feedback, and also compared it against the KNFB Reader iPhone app, which was the current state of the art for reading printed documents.
Unfortunately I don’t have time to talk much about our prototype system, but all of the details are available in our paper. We used the same haptic and audio cues as in the first study and the process for exploring and reading a document was very similar from the participants’ perspective.
Participants each used their preferred directional guidance type from the first study to explore and read two documents using our prototype system. The documents had a single column and were approximately the same length and reading level.
Afterward, participants used the KNFB Reader iPhone app to photograph and read 3 more documents. Two of these were similar to those read with our prototype, but the third had two columns since KNFB Reader supports multi-column documents.
All four participants completed the reading tasks with our prototype, but their performance and subjective feedback were mixed.
The average reading speed was 45 words per minute, but the speed varied quite a bit. The participants all expressed concern over the level of concentration that was required to interpret the directional guidance and other audio cues while listening to the speech feedback.
Two quotes in particular highlight the difference in opinions. One participant was enthusiastic about the concept, saying “I’m very pleased and excited about the system. I think it could make a great difference in my life.” Another was more critical, finding the approach to be slower than expected: “It seems like a lot of effort for reading text.”
I want to stress that our experiment with the KNFB Reader app was not a controlled comparison. But even so, and even without using all of the app's features, participants unanimously preferred KNFB Reader for the types of documents we asked them to read.
The main reason for that was how fluid the reading experience was after the document had been captured, since the app read the document quickly and participants were able to concentrate fully on the content.
This ties into our original questions about the feasibility of a touch-based approach for reading printed text. We expected that it would have many advantages. [next slide] These included information about the spatial layout, [next slide] direct access to the text on the page and control over the reading speed, [next slide] easier camera framing, [next slide] and the potential for more efficient text detection and recognition.
We did observe these advantages in our experiments, but some important concerns also came up.
Our studies showed that reading text line-by-line can be slow and require high mental and physical effort. The reader has to pay attention to multiple haptic or audio cues along with the speech feedback.
This finding is consistent with previous work in this area, and is one of the biggest limitations of our approach. Practice and improvements to the guidance interface might help, but we’d need to do a longer study over multiple sessions to see how much.
We also don’t know how important spatial layout information will be in general. For relatively simple documents like the ones we used in our studies, it might not have much benefit. But for documents with important spatial information, such as maps and graphs, an approach like ours could become much more useful.
To our knowledge, no one has investigated this question as of yet, at least not in the context of touch-based access to physical documents.
Toward that end, we plan to run a more in-depth study in the near future with these types of documents to see how useful spatial layout information might be in everyday use.
We’ve also considered the idea that a hybrid system might be a good compromise. It could use a body-mounted camera to determine context and read back longer passages, but then use the finger-mounted camera for immediate access and exploration.
Finally, reading is only one of the applications we hope to support, and our goal is to produce a wearable device that augments the sense of touch and helps with a variety of activities of daily living.
That’s it for my presentation. Thank you to everyone on my team, and to all of you attending today.
Previous research has used directional guidance methods similar to these, but in each case the focus was on feasibility. The user studies had small sample sizes of three to four participants and didn't report quantitative performance metrics. That prevents us from fully understanding the effectiveness of finger guidance, as well as the reading performance and user reactions. Participants in those previous studies also had contradictory preferences when comparing the different types of directional guidance.