Electronic sketching has seen a resurgence of interest over the years, most recently in the mobile web context, where diverse devices, operating systems, and browsers must be considered. Multi-platform (e.g., web-based) sketching systems can be built to let users sketch on the device of their preference. However, web applications do not perform equally on all devices, a critical issue for applications that require instant visual feedback, such as sketch-based systems. This paper describes a user study conducted to identify the most appropriate response rates (expressed in frames per second) for end users while sketching. The results are expected to guide stakeholders in defining response parameters for sketching applications on the web by showing the intervals that end users accept, tolerate, and reject.
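The study's accepted/tolerated/rejected intervals are expressed in frames per second; as a reminder of what such a rate means for implementers, here is a minimal sketch of the conversion to a per-frame time budget (the function name and example rates are ours, not the paper's):

```python
# A quick sanity check, not taken from the paper: a response rate in frames
# per second translates into a per-frame time budget that a web sketching
# application must stay under to sustain that rate.
def frame_budget_ms(fps):
    """Maximum time available to render one frame at the given rate."""
    return 1000.0 / fps

for fps in (15, 30, 60):
    print(f"{fps} fps -> {frame_budget_ms(fps):.1f} ms per frame")
```

At 60 fps, for example, every stroke update must be rendered in under roughly 16.7 ms.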
This document describes how PowerPoint can be used as a tool for website usability testing and analysis. It details three case studies using PowerPoint to conduct card sorting, create an interactive prototype, and perform user testing of a college website redesign. For card sorting, PowerPoint was used to email content pages to participants and analyze results. An interactive prototype was built to test navigation scenarios. User testing measured completion times, errors and satisfaction ratings. Findings revealed navigation issues and provided insights not found with paper prototypes. The document argues PowerPoint is a viable hybrid solution that is low-cost, easy to use and allows for remote testing.
This document discusses usability engineering and provides an overview of key concepts in the field. It defines usability and discusses the usability engineering lifecycle, which includes understanding users, prototyping, testing interfaces, and iterative design. Methods like heuristic evaluation, usability testing, and internationalization considerations are also covered. The document concludes by discussing potential future developments in usability like increased natural language and adaptive interfaces.
Executing for Every Screen: Build, launch and sustain products for your custo... (Steven Hoober)
The document discusses principles and best practices for designing products and interfaces that work across multiple screens and platforms. It emphasizes starting with principles, designing for user needs rather than specific platforms, building shared features and services first before customizing interfaces, and continuously evolving products based on data and user feedback.
EyeGrip: Detecting Targets in a Series of Uni-directional Moving Objects Usin... (Diako Mardanbegi)
EyeGrip proposes a novel yet simple technique of analysing eye movements for automatically detecting the user's objects of interest in a sequence of visual stimuli moving horizontally or vertically in front of the user's view. We assess the viability of this technique in a scenario where the user looks at a sequence of images moving horizontally on the display while the user's eye movements are tracked by an eye tracker. We conducted an experiment that shows the performance of the proposed approach. We also investigated the influence of the speed and the maximum number of visible images on the screen on the accuracy of EyeGrip. Based on the experiment results, we propose guidelines for designing EyeGrip-based interfaces. EyeGrip can be considered an implicit gaze interaction technique with potential use in a broad range of applications such as large screens, mobile devices, and eyewear computers. In this paper, we demonstrate the rich capabilities of EyeGrip with two example applications: 1) a mind reading game, and 2) a picture selection system. Our study shows that by selecting an appropriate speed and maximum number of visible images on the screen, the proposed method can be used in a fast scrolling task where the system accurately (87%) detects the moving images that are visually appealing to the user, stops the scrolling, and brings the item(s) of interest back to the screen.
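The core idea — when the eyes pursue a moving image, gaze samples track that image's trajectory — can be sketched very simply. This is an illustrative toy, not the authors' implementation; all names and the synthetic data are our assumptions:

```python
# Toy sketch of the EyeGrip idea: the object of interest is the moving image
# whose on-screen path stays closest to the recorded gaze samples
# (smooth-pursuit matching). Not the paper's actual algorithm.
def mismatch(gaze_x, traj_x):
    """Mean squared horizontal distance between gaze and an object's path."""
    return sum((g - p) ** 2 for g, p in zip(gaze_x, traj_x)) / len(gaze_x)

def object_of_interest(gaze_x, trajectories):
    """Index of the moving object whose trajectory best matches the gaze."""
    return min(range(len(trajectories)),
               key=lambda i: mismatch(gaze_x, trajectories[i]))

# Three images scroll left at 5 px/frame from different starting offsets;
# the gaze pursues the second one (offset 300) with a little jitter.
speed = -5.0
trajectories = [[off + speed * t for t in range(8)] for off in (0, 300, 600)]
gaze = [300 + speed * t + j for t, j in
        enumerate([0.3, -0.2, 0.1, -0.4, 0.2, 0.0, -0.1, 0.3])]
print(object_of_interest(gaze, trajectories))  # → 1
```

A real system would of course work on noisy 2-D gaze data and decide online, but the matching principle is the same.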
This document summarizes how video is tested at Skype. It discusses the video team structure, development processes, types of testing performed, and tools used. The video team develops video functionality and releases a new version every two months. Testing includes manual and automatic testing, as well as functional, non-functional, integration, and performance testing. Logs are analyzed and an automatic calling system is used to run thousands of calls daily across platforms.
This talk, given at the VA Smalltalk Forum Europe 2010 in Stuttgart, gives an overview of techniques and tools to get existing Smalltalk projects back up to speed and productivity.
The talk included some demos of tools we created for some of our customers to make their project life much easier.
Slides for Houston iPhone Developers' Meetup (April 2012) (lqi)
The document discusses the importance of code quality and provides tips for improving code quality. It defines code quality as how well software is designed and implemented. It recommends code reviews, static analysis tools like Clang, AppCode, and OCLint to identify code smells and defects. It also discusses refactoring code to improve simplicity, clarity and reduce technical debt. Maintaining high code quality makes software easier to change and evolve over time.
Morph your mindset for Continuous Delivery, Agile Roots 2014 (lisacrispin)
This document outlines an agenda and content for a workshop on continuous integration, continuous delivery, and overcoming obstacles. The workshop includes presentations on key concepts, exercises for participants to collaborate in different roles and provide feedback, and discussions on challenges and experiments to try back at work. The goal is to help participants shift their mindset and learn techniques through interactive exercises to enable continuous delivery of software.
Thailand SPIN: Series 3: The Key to Success in Writing Programs that Meet the Requir... (Software Park Thailand)
This document summarizes a seminar on problems in software development. It discusses topics like requirements gathering, analysis and design, coding, and testing. It then outlines the agenda for the current seminar, including an introduction, a discussion on writing maintainable code with changes, and a conclusion. Risks of software development failures are presented. Past seminar discussions are recapped. Suggestions for problems in requirements gathering, analysis and design, and coding are provided. Finally, potential discussion topics are listed.
Video game design and programming course for the Master in Computer Engineering at the Politecnico di Milano. http://www.facebook.com/polimigamecollective https://twitter.com/@POLIMIGC http://www.youtube.com/PierLucaLanzi http://www.polimigamecollective.org
Politecnico di Milano, video games, computer engineering, game design, game development
Slides from the 2016/2017 edition of the Video game Design and Programming course at the Politecnico di Milano. More information at http://www.polimigamecollective.org Some of the video games developed by the students during the course are available at https://polimi-game-collective.itch.io
This document discusses setting up a workflow in JIRA to manage a project producing hundreds of short training videos. It describes the initial problems with the project including a lack of process clarity and visibility. It then outlines how the presenter used JIRA's workflow designer to collaboratively plan and iteratively develop a workflow with stakeholders. Key aspects covered include planning stages and transitions, custom fields for search and reporting, and using dashboards to provide visibility. The benefits of the new workflow for managing the project are highlighted.
This document summarizes career opportunities at Applied Information Services (AIS) including project manager, architect, and designer positions. It also advertises an upcoming Silverlight conference from June 16-18 with a discount for attendees. The document provides an overview of Silverlight and its evolution from earlier user interface technologies. It discusses the balance between reach, rich experiences, and ease of deployment that Silverlight aims to provide as a rich internet application platform.
At the 2014 NI Week in Austin, Texas, DMC engineers from Chicago, Boston and Denver came together to share information about High Speed Vision Systems and the work we do here at DMC.
"How To Race Squirrels" at Develop Conference in Brighton, 21st July 2011 (Playniac)
The document outlines Rob Davis's process for designing and producing commissioned games at his company Playniac. The process involves several stages: developing briefs and proposals, concept development and a written specification, wireframing using use cases and static/interactive wireframes, paper testing, creating game assets, user testing for feedback, balancing the game mechanics, and wrapping up the project. Each stage provides important inputs and refinements to help design fun and balanced games that meet the client's needs.
This document discusses remote usability testing tools that can be used earlier in the product development process. It introduces five types of tools: heat mapping to record user clicks, screen recording of user sessions, user testing tools to simulate lab tests, tools to solicit user feedback, and tools for collaborative peer reviews. Examples of specific tools are provided. The document argues that these remote tools can help improve agility and provide proof of usability issues earlier while shrinking budgets. They allow exploring usability with less invasive testing and inspire innovation through crowd-sourcing. While traditional labs are still useful, these tools provide a less expensive way to engage users and produce credible results.
Agile Software Development in practice: Experience, Tips and Tools from the T... (Valerie Puffet-Michel)
In the Division of Student Affairs at the University of Connecticut, the Applications Development team has been developing and delivering custom software using agile methods for over four years. In this session, we'll share our experiences and give you a behind the scenes look at how agile software development really works by walking you through how we translate the unique business needs of our clients into deployed software.
To the end of our possibilities with Adaptive User Interfaces (Jean Vanderdonckt)
Slides of the keynote presented at the 1st International Workshop on Human-in-the-Loop Applied Machine Learning (HITLAML '23)
September 04 - 06, 2023 - Belval, Luxembourg.
This presentation summarizes the evolution of techniques used to adapt the user interfaces to the context of use, which is composed of the user, the platform, and the environment.
Engineering the Transition of Interactive Collaborative Software from Cloud C... (Jean Vanderdonckt)
Paper presented at EICS '22: https://dl.acm.org/doi/10.1145/3532210
The "Software as a Service" (SaaS) model of cloud computing popularized online multiuser collaborative software. Two famous examples of this class of software are Office 365 from Microsoft and Google Workspace. Cloud technology removes the need to install and update the software on end users' computers and provides the necessary underlying infrastructure for online collaboration. However, to provide a good end-user experience, cloud services require an infrastructure able to scale up to the task and allow low-latency interactions with a variety of users worldwide. This is a limiting factor for actors that do not possess such infrastructure. Unlike cloud computing, which ignores the computational and interactional capabilities of end users' devices, the edge computing paradigm promises to exploit them as much as possible. To investigate the potential of edge computing over cloud computing, this paper presents a method for engineering interactive collaborative software supported by edge devices as a replacement for cloud computing resources. Our method is able to handle user interface aspects such as connection, execution, migration, and disconnection differently depending on the available technology. We exemplify our approach by developing a distributed Pictionary game deployed in two scenarios: a non-shared scenario where each participant interacts only with their own device and a shared scenario where participants also share a common device, including a TV. After a theoretical comparative study of edge vs. cloud computing, an experiment compares the two implementations to determine their effect on the end user's perceived experience and on perceived vs. real latency.
UsyBus: A Communication Framework among Reusable Agents integrating Eye-Track... (Jean Vanderdonckt)
Presentation of ACM EICS '22 paper: https://dl.acm.org/doi/10.1145/3532207
Eye movement analysis is a popular method to evaluate whether a user interface meets the users' requirements and abilities. However, with current tools, setting up a usability evaluation with an eye-tracker is resource-consuming, since the areas of interest are defined manually, exhaustively and redefined each time the user interface changes. This process is also error-prone, since eye movement data must be finely synchronised with user interface changes. These issues become more serious when the user interface layout changes dynamically in response to user actions. In addition, current tools do not allow easy integration into interactive applications, and opportunistic code must be written to link these tools to user interfaces. To address these shortcomings and to leverage the capabilities of eye-tracking, we present UsyBus, a communication framework for autonomous, tight coupling among reusable agents. These agents are responsible for collecting data from eye-trackers, analyzing eye movements, and managing communication with other modules of an interactive application. UsyBus allows multiple heterogeneous eye-trackers as input, provides multiple configurable outputs depending on the data to be exploited. Modules exchange data based on the UsyBus communication framework, thus creating a customizable multi-agent architecture. UsyBus application domains range from usability evaluation to gaze interaction applications design. Two case studies, composed of reusable modules from our portfolio, exemplify the implementation of the UsyBus framework.
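The decoupling UsyBus describes — tracker agents publishing data that analysis agents consume without knowing each other — is the classic publish/subscribe pattern, which can be sketched in a few lines. The topic name and event shape below are illustrative assumptions, not the UsyBus API:

```python
# Minimal publish/subscribe bus sketching the kind of agent decoupling the
# abstract describes: an eye-tracker agent publishes fixation events on a
# topic, and an analysis agent subscribes without knowing the publisher.
from collections import defaultdict

class Bus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subs[topic]:
            handler(message)

bus = Bus()
fixations = []
bus.subscribe("gaze/fixation", fixations.append)              # analysis agent
bus.publish("gaze/fixation", {"x": 120, "y": 88, "ms": 230})  # tracker agent
print(fixations)  # → [{'x': 120, 'y': 88, 'ms': 230}]
```

Swapping one eye-tracker for another then only changes the publisher, never the subscribers — the property that makes the agents reusable.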
µV: An Articulation, Rotation, Scaling, and Translation Invariant (ARST) Mult... (Jean Vanderdonckt)
Paper presented at ACM EICS '22
Finger-based gesture input has become a major interaction modality for surface computing. Due to the low precision of the finger and the variation in gesture production, multistroke gestures are still challenging to recognize in various setups. In this paper, we present µV, a multistroke gesture recognizer that addresses the properties of articulation, rotation, scaling, and translation invariance by combining $P+'s cloud matching for articulation invariance with !FTL's local shape distance for RST invariance. We evaluate µV against five competitive recognizers on MMG, an existing gesture set, and on two new versions for smartphones and tablets, MMG+ and RMMG+, a randomly rotated version on both platforms. µV is significantly more accurate than its predecessors when rotation invariance is required and not significantly less accurate when it is not. µV is also significantly faster than the others with many samples and not significantly slower with few samples.
RepliGES and GEStory: Visual Tools for Systematizing and Consolidating Knowle... (Jean Vanderdonckt)
The body of knowledge accumulated by gesture elicitation studies (GES), although useful, large, and extensive, is also heterogeneous, scattered in the scientific literature across different venues and fields of research, and difficult to generalize to other contexts of use represented by different gesture types, sensing devices, applications, and user categories. To address such aspects, we introduce RepliGES, a conceptual space that supports (1) replications of gesture elicitation studies to confirm, extend, and complete previous findings, (2) reuse of previously elicited gesture sets to enable new discoveries, and (3) extension and generalization of previous findings with new methods of analysis and for new user populations towards consolidated knowledge of user-defined gestures. Based on RepliGES, we introduce GEStory, an interactive design space and visual tool, to structure, visualize, and identify user-defined gestures from 216 published gesture elicitation studies.
Gesture-based information systems: from DesignOps to DevOps (Jean Vanderdonckt)
Keynote address for the 29th International Conference on Information Systems Development ISD'2021 (Valencia, Spain, September 8-10, 2021). See https://isd2021.webs.upv.es/program.php#keynotes
This talk promotes the Seven I's:
Implementation continuity
Inclusion of end-users
Interaction first
Integration among stakeholders
Iteration short
Incremental progress
Innovation openness
Intra-platform plasticity regularly assumes that the display of a computing platform remains fixed and rigid during interactions with the platform in contrast to reconfigurable displays, which can change form depending on the context of use. In this paper, we present a model-based approach for designing and deploying graphical user interfaces that support intra-platform plasticity for reconfigurable displays. We instantiate the model for E3Screen, a new device that expands a conventional laptop with two slidable, rotatable, and foldable lateral displays, enabling slidable user interfaces. Based on a UML class diagram as a domain model and a SCRUD list as a task model, we define an abstract user interface as interaction units with a corresponding master-detail design pattern. We then map the abstract user interface to a concrete user interface by applying rules for the reconfiguration, concrete interaction, unit allocation, and widget selection and implement it in JavaScript. In a first experiment, we determine display configurations most preferred by users, which we organize in the form of a state-transition diagram. In a second experiment, we address reconfiguration rules and widget selection rules. A third experiment provides insights into the impact of the lateral displays on a visual search task.
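The abstract-to-concrete mapping step — applying widget selection rules to interaction units — can be illustrated with a toy rule table. The rule keys and widget names below are invented for illustration; the paper's actual rules, derived from its experiments, differ:

```python
# Toy widget-selection rule table in the spirit of the model-based mapping
# the abstract describes (abstract interaction unit -> concrete widget).
# Keys and widget names are illustrative assumptions, not the paper's rules.
RULES = {
    ("boolean", None): "checkbox",
    ("enum", "few"): "radio group",      # few choices: show all at once
    ("enum", "many"): "drop-down list",  # many choices: save screen space
    ("text", None): "text field",
}

def select_widget(data_type, choices=None):
    """Pick a concrete widget for an abstract interaction unit."""
    if data_type == "enum":
        cardinality = "few" if len(choices) <= 5 else "many"
        return RULES[("enum", cardinality)]
    return RULES[(data_type, None)]

print(select_widget("enum", ["red", "green", "blue"]))  # → radio group
print(select_widget("text"))                            # → text field
```

Keeping such rules as data rather than code is what lets a model-based environment re-run the mapping when the display is reconfigured.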
Evaluating Gestural Interaction: Models, Methods, and Measures (Jean Vanderdonckt)
The document discusses various methods for evaluating gestural user interfaces (UIs), including comparing a UI to a reference model, collecting evaluation data on usability criteria, and using standardized scales and metrics. Common dimensions for evaluation are goals, utility, usability, and factors like system acceptance, ease of use, and cost. Methods mentioned include observations, questionnaires, heuristic evaluations, and measuring task performance and preferences using standardized scales. Guidelines are provided for designing and assessing the usability of different gestures.
Conducting a Gesture Elicitation Study: How to Get the Best Gestures From Peo... (Jean Vanderdonckt)
Lecture 3: Conducting a Gesture Elicitation Study: How to Get the Best Gestures From People?
Francqui Chair in Computer Science 2020 VUB, Jean Vanderdonckt, 27 April 2021
This document provides an overview of gestural interaction and various gesture recognition techniques. It begins with definitions of gestures and how they can vary based on factors like the body part used, number of dimensions, whether they are contact-based or not. It then discusses benefits of gestures and examples of gesture recognizers like xStroke and techniques like Rubine, SiGeR, LVS, hidden Markov models, and the $-family of recognizers. The document provides details on properties like stroke, direction, and rotation invariance as well as training and recognition phases for different recognizers.
The document summarizes Jean Vanderdonckt's upcoming lecture on gestural interaction. It will cover the psychological, hardware, software, usage, social and user experience dimensions of gestural interaction. On the psychological dimension, it discusses definitions of gestures and theories of gesture types. On the hardware dimension, it outlines paradigms of contact-based and contact-less gesture interaction. On the software dimension, it provides an overview of gesture recognition algorithms such as Rubine, Siger, LVS and nearest neighbor classification.
User-centred Development of a Clinical Decision-support System for Breast Can... (Jean Vanderdonckt)
See the paper at https://www.scitepress.org/Link.aspx?doi=10.5220/0010258900600071
We conducted a user-centered design of a clinical decision-support system for breast cancer screening, diagnosis, and reporting based on stroke gestures. We combined knowledge elicitation interviews, scenario-focused questionnaires, and paper mock-ups to understand user needs. Multi-fidelity (low and high) prototypes were designed and compared, first in vitro in a usability laboratory, then in vivo in the real world. The resulting user interface provides radiologists with a platform that integrates domain-oriented tools for the visualization of mammograms and the manual and semi-automatic annotation of breast cancer findings based on stroke gestures. The contribution of this work lies in that, to the best of our knowledge, stroke gestures have not yet been applied to the annotation of mammograms. On the one hand, although there is a substantial amount of research on stroke-based interaction, none focuses specifically on the domain of breast cancer annotation. On the other hand, typical breast cancer annotation tools rely on a keyboard and a mouse.
Simplifying the Development of Cross-Platform Web User Interfaces by Collabo... (Jean Vanderdonckt)
Ensuring responsive design of web applications requires their user interfaces to adapt to different contexts of use, which subsume the end users, the devices and platforms used to carry out the interactive tasks, and the environment in which they occur. To address the challenges posed by responsive design, aiming to simplify development by factoring out the common parts from the specific ones, this paper presents Quill, a web-based development environment that enables the various stakeholders of a web application to collaboratively adopt a model-based design of the user interface for cross-platform deployment. The paper establishes a series of requirements for collaborative model-based design of cross-platform web user interfaces motivated by the literature, observational, and situational design. It then elaborates on potential solutions that satisfy these requirements and explains the solution selected for Quill. A user survey was conducted to determine how stakeholders appreciate model-based user interface design and how they rate the importance of the requirements that led to Quill.
Detachable user interfaces consist of graphical user interfaces whose parts, or whole, can be detached at run-time from their host, migrated onto another computing platform while the task is carried out, possibly adapted to the new platform, and attached to the target platform in a peer-to-peer fashion. Detaching is the property of splitting off a part of a UI to transfer it onto another platform. Attaching is the reciprocal property: a part of an existing interface can be attached to the interface currently in use so as to recompose another one on demand, according to the user's needs and task requirements. Assembling interface parts by detaching and attaching allows dynamically composing, decomposing, and re-composing new interfaces on demand. To support this interaction paradigm, a development infrastructure has been developed based on a series of primitives such as display, undisplay, copy, expose, return, transfer, delegate, and switch. We exemplify it with QTkDraw, a painting application with attaching and detaching based on the development infrastructure.
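The detach/attach pair the abstract describes can be modelled in a few lines. The classes and function names below follow the abstract's vocabulary loosely and are illustrative assumptions, not the paper's infrastructure:

```python
# Toy model of the detach/attach idea: a UI part is removed from one
# platform's surface at run-time and re-attached to another's.
# Illustrative only; the paper's primitives are richer (copy, expose,
# transfer, delegate, switch, ...).
class Platform:
    def __init__(self, name):
        self.name, self.parts = name, []

def detach(part, source):
    """Split a part off its host platform and hand it over."""
    source.parts.remove(part)
    return part

def attach(part, target):
    """Recompose the target interface with the migrated part."""
    target.parts.append(part)

laptop, tablet = Platform("laptop"), Platform("tablet")
laptop.parts = ["toolbar", "canvas"]
attach(detach("toolbar", laptop), tablet)
print(laptop.parts, tablet.parts)  # → ['canvas'] ['toolbar']
```

The real infrastructure additionally adapts the migrated part to the target platform; this sketch only captures the recomposition step.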
The Impact of Comfortable Viewing Positions on Smart TV Gestures (Jean Vanderdonckt)
Whereas gesture elicitation studies for TV interaction assume that participants adopt an upright, frontal viewing position, we asked 21 participants to hold a natural, comfortable viewing position, the posture they adopt when watching TV at home. By involving a broad selection of users regarding age and profession, our study targets a higher ecological validity than existing studies. Agreement rates were lower than in existing studies using an upright, frontal viewing position. Participants experienced problems due to (1) having to use their non-dominant hand instead of their dominant hand, (2) head orientations that made some physical movements more difficult to perform, and (3) being hindered in their movement by the sofa they lay on. Since each person may hold a different position inducing different gestures due to the aforementioned problems, the effect of a comfortable viewing position is analyzed by comparison to gestures for a frontal position.
Head and Shoulders Gestures: Exploring User-Defined Gestures with Upper Body (Jean Vanderdonckt)
This paper presents empirical results about user-defined gestures for head and shoulders by analyzing 308 gestures elicited from 22 participants for 14 referents materializing 14 different types of tasks in an IoT context of use. We report an overall medium consensus, but with medium variance (mean: .263, min: .138, max: .390 on the unit scale) between participants' gesture proposals, while their thinking times were less similar (min: 2.45 sec, max: 22.50 sec), which suggests that head and shoulders gestures are not all equally easy to imagine and produce. We point to the challenges of deciding which head and shoulders gestures will become the consensus set based on four criteria: the agreement rate, their individual frequency, their associative frequency, and their unicity.
Paper accessible at https://dial.uclouvain.be/pr/boreal/en/object/boreal%3A213794
G-Menu: A Keyword-by-Gesture based Dynamic Menu Interface for Smartphones (Jean Vanderdonckt)
Instead of relying on graphical or vocal modalities for searching for an item by keyword (the K-Menu), this paper presents the G-Menu, which exploits gesture interaction and gesture recognition: when a user sketches a keyword by gesturing the first letters of its label, a menu with items related to the recognized letters is constructed dynamically and presented to the user for selection and auto-completion. The selection can be completed either gesturally by an appropriate gesture (the G-Menu) or by touch only (the T-Menu). This paper compares the three types of menu, i.e., by keyword, by gesture, and by touch, in a user study with twenty participants, measuring item selection time (task efficiency), error rate (task effectiveness), and subjective satisfaction (user satisfaction).
Paper accessible at https://dial.uclouvain.be/pr/boreal/en/object/boreal%3A213790
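The dynamic-menu construction step — rebuilding the menu as each recognized letter extends the prefix — reduces to prefix filtering. A minimal sketch, with an invented item list standing in for the recognizer's vocabulary:

```python
# Sketch of the dynamic-menu idea behind G-Menu: as letters of the keyword
# are recognized one by one, the menu is rebuilt with the items matching the
# prefix so far. Item list and recognizer output are illustrative.
ITEMS = ["Copy", "Cut", "Close", "Paste", "Print", "Preferences"]

def menu_for(prefix, items=ITEMS):
    """Items offered for selection/auto-completion given the letters so far."""
    p = prefix.lower()
    return [item for item in items if item.lower().startswith(p)]

print(menu_for("c"))   # → ['Copy', 'Cut', 'Close']
print(menu_for("pr"))  # → ['Print', 'Preferences']
```

Each additional recognized letter shrinks the menu, so the user rarely needs to gesture the full keyword before selecting.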
Unistroke and multistroke gesture recognizers have always striven to reach some robustness with respect to all variations encountered when people issue gestures by hand on touch surfaces or with sensing devices. For this purpose, successful stroke recognizers rely on a gesture recognition algorithm that satisfies a series of invariance properties, such as stroke-order invariance, stroke-number invariance, stroke-direction invariance, and position, scale, and rotation invariance. Before initiating any recognition activity, these algorithms ensure these properties by performing several pre-processing operations. These operations add computational cost to the recognition process, as well as a potential error bias. To cope with this problem, we introduce an algorithm that ensures all these properties analytically, based on vector algebra, instead of statistically. Instead of points, the recognition algorithm works on vectors between points. We demonstrate that this approach not only eliminates the need for these pre-processing operations but also constitutes an entirely structure-preserving transformation.
Paper available at https://dial.uclouvain.be/pr/boreal/en/object/boreal%3A217006
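Why between-point vectors remove a pre-processing step can be seen in a few lines: translating every point of a stroke leaves its between-point vectors unchanged, so translation invariance holds by construction, with no "move to origin" pass. This is an illustrative property check, not the paper's recognizer:

```python
# Illustrative check, not the paper's algorithm: representing a stroke by
# its between-point vectors makes matching translation-invariant by
# construction, since translating every point cancels out in the
# differences, so no "translate to origin" pre-processing is needed.
def between_point_vectors(points):
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

stroke = [(10, 10), (20, 15), (30, 10)]
shifted = [(x + 100, y - 40) for x, y in stroke]  # same gesture, elsewhere
print(between_point_vectors(stroke) == between_point_vectors(shifted))  # → True
```

The paper's contribution goes further, handling scale and rotation analytically in the same vector-algebra framework; this sketch only demonstrates the translation case.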
Body-based gestures, such as those acquired by the Kinect sensor, today benefit from efficient tools for their recognition and development, but less so for automated reasoning. To facilitate this activity, an ontology for structuring body-based gestures, based on the user, body and body parts, gestures, and environment, is designed and encoded in the Web Ontology Language (OWL) according to modelling triples (subject, predicate, object). As a proof of concept and to feed this ontology, a gesture elicitation study collected 24 participants × 19 referents for IoT tasks = 456 elicited body-based gestures, which were classified and expressed according to the ontology.
See paper at https://dl.acm.org/citation.cfm?id=3328238
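The (subject, predicate, object) modelling the abstract mentions can be illustrated with plain tuples. The predicate names and identifiers below are invented for illustration, not the paper's OWL vocabulary:

```python
# Toy encoding of elicited gestures as (subject, predicate, object) triples,
# in the spirit of the OWL modelling the abstract describes. Predicate names
# and identifiers are illustrative assumptions.
triples = [
    ("gesture42", "performedBy", "participant07"),
    ("gesture42", "usesBodyPart", "rightArm"),
    ("gesture42", "mapsToReferent", "turnLightOn"),
]

def query(triples, predicate):
    """All (subject, object) pairs linked by the given predicate."""
    return [(s, o) for s, p, o in triples if p == predicate]

print(query(triples, "usesBodyPart"))  # → [('gesture42', 'rightArm')]
```

Once gestures are expressed this way, automated reasoning reduces to queries and inference over the triple store rather than ad hoc code per gesture set.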
Global Collaboration for Space Exploration.pdf (Sachin Chitre)
Distinguished readers, leaders, esteemed colleagues, and fellow dreamers,
We stand at the precipice of a new era, an epoch where the boundaries of human potential are poised to be redefined. For centuries, humanity has gazed up at the celestial expanse, yearning to explore the cosmic mysteries that beckon us.
Today, I present a vision, a blueprint for a journey that transcends the limitations of conventional science and technology.
Imagine a world where the shackles of gravity are broken, where interstellar travel is no longer confined to the realms of science fiction. A world united not by petty differences, but by a shared purpose – to explore, to discover, and to elevate humanity.
This presentation outlines a comprehensive research project to construct and deploy Vimanas – ancient, aerial vehicles of wisdom and power. By harnessing the knowledge of our ancestors and the advancements of modern science, we can embark on a quest to not only conquer the skies but to conquer the cosmos.
Let us together ignite the spark of human ingenuity and propel our civilization towards a future where the stars are within our reach and where the bonds of humanity are strengthened through shared exploration.
The time for action is now. Let us embark on this extraordinary journey together.
Using ScyllaDB for Real-Time Write-Heavy Workloads (ScyllaDB)
Keeping latencies low for highly concurrent, intensive data ingestion
ScyllaDB’s “sweet spot” is workloads over 50K operations per second that require predictably low (e.g., single-digit millisecond) latency. And its unique architecture makes it particularly valuable for the real-time write-heavy workloads such as those commonly found in IoT, logging systems, real-time analytics, and order processing.
Join ScyllaDB technical director Felipe Cardeneti Mendes and principal field engineer, Lubos Kosco to learn about:
- Common challenges that arise with real-time write-heavy workloads
- The tradeoffs teams face and tips for negotiating them
- ScyllaDB architectural elements that support real-time write-heavy workloads
- How your peers are using ScyllaDB with similar workloads
How CXAI Toolkit uses RAG for Intelligent Q&A (Zilliz)
Manasi will be talking about RAG and how CXAI Toolkit uses RAG for Intelligent Q&A. She will go over what sets CXAI Toolkit's Intelligent Q&A apart from other Q&A systems, and how our trusted AI layer keeps customer data safe. She will also share some current challenges being faced by the team.
Flame emission spectroscopy is a technique used to determine the concentration of metal ions in a sample. The flame provides the energy for the excitation of atoms introduced into it. The instrument involves components such as a sample delivery system, burner, mirror, slits, monochromator, filter, and detector (photomultiplier tube and phototube detectors). Several interferences can arise during sample analysis, such as spectral interference, ionisation interference, and chemical interference. The technique can be used for both quantitative and qualitative study: determining lead in petrol, determining alkali and alkaline earth metals, and determining the fertilizer requirements of soil.
TrustArc Webinar - Innovating with TRUSTe Responsible AI CertificationTrustArc
In a landmark year marked by significant AI advancements, it’s vital to prioritize transparency, accountability, and respect for privacy rights with your AI innovation.
Learn how to navigate the shifting AI landscape with our innovative solution TRUSTe Responsible AI Certification, the first AI certification designed for data protection and privacy. Crafted by a team with 10,000+ privacy certifications issued, this framework integrated industry standards and laws for responsible AI governance.
This webinar will review:
- How compliance can play a role in the development and deployment of AI systems
- How to model trust and transparency across products and services
- How to save time and work smarter in understanding regulatory obligations, including AI
- How to operationalize and deploy AI governance best practices in your organization
Discover practical tips and tricks for streamlining your Marketo programs from end to end. Whether you're new to Marketo or looking to enhance your existing processes, our expert speakers will provide insights and strategies you can implement right away.
Jacquard Fabric Explained: Origins, Characteristics, and Usesldtexsolbl
In this presentation, we’ll dive into the fascinating world of Jacquard fabric. We start by exploring what makes Jacquard fabric so special. It’s known for its beautiful, complex patterns that are woven into the fabric thanks to a clever machine called the Jacquard loom, invented by Joseph Marie Jacquard back in 1804. This loom uses either punched cards or modern digital controls to handle each thread separately, allowing for intricate designs that were once impossible to create by hand.
Next, we’ll look at the unique characteristics of Jacquard fabric and the different types you might encounter. From the luxurious brocade, often used in fancy clothing and home décor, to the elegant damask with its reversible patterns, and the artistic tapestry, each type of Jacquard fabric has its own special qualities. We’ll show you how these fabrics are used in everyday items like curtains, cushions, and even artworks, making them both functional and stylish.
Moving on, we’ll discuss how technology has changed Jacquard fabric production. Here, LD Texsol takes center stage. As a leading manufacturer and exporter of electronic Jacquard looms, LD Texsol is helping to modernize the weaving process. Their advanced technology makes it easier to create even more precise and complex patterns, and also helps make the production process more efficient and environmentally friendly.
Finally, we’ll wrap up by summarizing the key points and highlighting the exciting future of Jacquard fabric. Thanks to innovations from companies like LD Texsol, Jacquard fabric continues to evolve and impress, blending traditional techniques with cutting-edge technology. We hope this presentation gives you a clear picture of how Jacquard fabric has developed and where it’s headed in the future.
Selling software today doesn’t look anything like it did a few years ago. Especially software that runs inside a customer environment. Dreamfactory has used Anchore and Ask Sage to achieve compliance in a record time. Reducing attack surface to keep vulnerability counts low, and configuring automation to meet those compliance requirements. After achieving compliance, they are keeping up to date with Anchore Enterprise in their CI/CD pipelines.
The CEO of Ask Sage, Nic Chaillan, the CEO of Dreamfactory Terence Bennet, and Anchore’s VP of Security Josh Bressers are going to discuss these hard problems.
In this webinar we will cover:
- The standards Dreamfactory decided to use for their compliance efforts
- How Dreamfactory used Ask Sage to collect and write up their evidence
- How Dreamfactory used Anchore Enterprise to help achieve their compliance needs
- How Dreamfactory is using automation to stay in compliance continuously
- How reducing attack surface can lower vulnerability findings
- How you can apply these principles in your own environment
When you do security right, they won’t know you’ve done anything at all!
The Challenge of Interpretability in Generative AI Models.pdfSara Kroft
Navigating the intricacies of generative AI models reveals a pressing challenge: interpretability. Our blog delves into the complexities of understanding how these advanced models make decisions, shedding light on the mechanisms behind their outputs. Explore the latest research, practical implications, and ethical considerations, as we unravel the opaque processes that drive generative AI. Join us in this insightful journey to demystify the black box of artificial intelligence.
Dive into the complexities of generative AI with our blog on interpretability. Find out why making AI models understandable is key to trust and ethical use and discover current efforts to tackle this big challenge.
Multimodal Embeddings (continued) - South Bay Meetup SlidesZilliz
Frank Liu will walk through the history of embeddings and how we got to the cool embedding models used today. He'll end with a demo on how multimodal RAG is used.
Leading Bigcommerce Development Services for Online RetailersSynapseIndia
As a leading provider of Bigcommerce development services, we specialize in creating powerful, user-friendly e-commerce solutions. Our services help online retailers increase sales and improve customer satisfaction.
UiPath Community Day Amsterdam: Code, Collaborate, ConnectUiPathCommunity
Welcome to our third live UiPath Community Day Amsterdam! Come join us for a half-day of networking and UiPath Platform deep-dives, for devs and non-devs alike, in the middle of summer ☀.
📕 Agenda:
12:30 Welcome Coffee/Light Lunch ☕
13:00 Event opening speech
Ebert Knol, Managing Partner, Tacstone Technology
Jonathan Smith, UiPath MVP, RPA Lead, Ciphix
Cristina Vidu, Senior Marketing Manager, UiPath Community EMEA
Dion Mes, Principal Sales Engineer, UiPath
13:15 ASML: RPA as Tactical Automation
Tactical robotic process automation for solving short-term challenges, while establishing standard and re-usable interfaces that fit IT's long-term goals and objectives.
Yannic Suurmeijer, System Architect, ASML
13:30 PostNL: an insight into RPA at PostNL
Showcasing the solutions our automations have provided, the challenges we’ve faced, and the best practices we’ve developed to support our logistics operations.
Leonard Renne, RPA Developer, PostNL
13:45 Break (30')
14:15 Breakout Sessions: Round 1
Modern Document Understanding in the cloud platform: AI-driven UiPath Document Understanding
Mike Bos, Senior Automation Developer, Tacstone Technology
Process Orchestration: scale up and have your Robots work in harmony
Jon Smith, UiPath MVP, RPA Lead, Ciphix
UiPath Integration Service: connect applications, leverage prebuilt connectors, and set up customer connectors
Johans Brink, CTO, MvR digital workforce
15:00 Breakout Sessions: Round 2
Automation, and GenAI: practical use cases for value generation
Thomas Janssen, UiPath MVP, Senior Automation Developer, Automation Heroes
Human in the Loop/Action Center
Dion Mes, Principal Sales Engineer @UiPath
Improving development with coded workflows
Idris Janszen, Technical Consultant, Ilionx
15:45 End remarks
16:00 Community fun games, sharing knowledge, drinks, and bites 🍻
Generative AI technology is a fascinating field that focuses on creating comp...Nohoax Kanont
Generative AI technology is a fascinating field that focuses on creating computer models capable of generating new, original content. It leverages the power of large language models, neural networks, and machine learning to produce content that can mimic human creativity. This technology has seen a surge in innovation and adoption since the introduction of ChatGPT in 2022, leading to significant productivity benefits across various industries. With its ability to generate text, images, video, and audio, generative AI is transforming how we interact with technology and the types of tasks that can be automated.
1. Assessing Lag Perception in Electronic Sketching
Ugo Braga Sangiorgi
Vivian G. Motti
François Beuvens
Jean Vanderdonckt
Louvain Interaction Laboratory,
Université catholique de Louvain - Belgium
2. Agenda
• The lag problem
• Sketching on many devices
• Experiment
• Results
• Conclusion
9. Sketching on many devices
• HTML5
• 1.4 devices per person by 2016 [1]
• Design interfaces
o Diverse contexts
o Weight, screen types
o E-paper?
[1] CISCO's Visual Networking Index
http://www.ciscovni.com/vni_forecast/index.htm
12. Why?
• Developers would know what to do
o Respond accordingly
o Keep the refresh rate low
• Discard some devices
o Activities
o User profiles
13. Experiment
• What is the FPS rate at which users start to perceive the system as "too slow"?
• Three phases
• Draw squares and grade the speed.
14. Subjects
• 35 users recruited around the campus.
o 16 women, 19 men
o Avg. 28 years old
• Different nationalities
• Different fields of expertise
o Biology, Psychology, Computer Science, Management, etc.
15. Material
• Wacom Cintiq 12
o 75 Hz (upper limit of 75 FPS)
• MacBook Pro 2.9 GHz
• http://exp.gambit-sketch.appspot.com (be nice)
22. 2nd phase
• Likert scale grading
• "How happy are you with this response rate?"
• Goal: find out when users start to grade as "bad" and "really bad"
29. Findings
• The range below 20 FPS was rejected by users
• The difference between grades is not significant above 24 FPS
• No conclusive evidence that subjects perceived differences of 2, 5 and 10 FPS when testing pairs of rates
30. Next steps
• Writing activity
• Specific types of professionals
o Designers
o Architects
o Non-drawing professionals
This paper is entitled Assessing Lag Perception in Electronic Sketching
This is the agenda of the presentation. I will talk about the lag problem and the motivation, which is sketching on different devices. Then I will describe the experiment we conducted to address the problem in that context, and finally our findings and conclusions.
This paper was motivated by the question "What makes a device suitable or unsuitable for sketching?". There are probably many factors we could analyze to answer that question; one of them is lag.
What is lag? I have prepared a video to explain it. I will refer to lag as the delay between the actual stroke drawing and what the system underneath is capable of rendering. I will use FPS to refer to the speed in frames per second. The different speeds we are using here are set with the scrollbar at the top, and we can see how the rendering differs. This difference can occur across platforms, because they have different processing capabilities or because of other processes running on the system.
Lag in electronic sketching is composed of three main parts. First, the input latency: the time it takes for the device to capture the movement. Second, the hardware feedback: the refresh rate, usually expressed in Hz; for instance, my laptop screen has a refresh rate of 60 Hz, which gives a practical upper limit of 60 frames per second. Third, the software feedback, which we can control programmatically. This last component is the focus of this work.
So how do we control the refresh rate in a program? We divide one second by the desired number of frames per second; that tells us how often we have to refresh the screen. A very simple function in JavaScript can then update the screen, based on the set of captured points, every 33 milliseconds for a 30 FPS target. With this work we are trying to discover the lowest rate at which the sketching activity remains comfortable for users.
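The idea can be sketched as follows (a minimal illustration, not the study's actual code; `drawStrokes` is a hypothetical renderer that flushes the captured points to the canvas):

```javascript
// Screen-update interval for a target frame rate:
// one second (1000 ms) divided by the desired frames per second.
function refreshInterval(fps) {
  return 1000 / fps; // e.g. 30 FPS -> ~33 ms between redraws
}

// Minimal redraw loop: flush the captured points once per interval.
// `drawStrokes` stands in for the actual canvas-rendering call.
function startRedrawLoop(drawStrokes, points, fps) {
  return setInterval(function () {
    drawStrokes(points);
  }, refreshInterval(fps));
}
```

Lowering `fps` lengthens the interval between redraws; this interval is exactly the software-feedback component the experiment varies.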
This paper was motivated by the research I am currently conducting on sketching in a multi-platform context; we could call it cross-device sketching. It is possible to create a system that offers designers the flexibility to use whatever device they want, so they can sketch and prototype interfaces. The goal is also to allow designers to test solutions directly on the device on which they want the application to be used. I am using HTML5 for that, so if a device has a browser, a designer can use it.
There is an estimate made by CISCO that by the end of 2016 there would be more devices than people in the world.
That puts huge pressure on designers, who will need to address many more things than today. Those devices will be tablets, smartphones, foldable-display devices, who knows? By using a cross-device system for prototyping, a designer would be able to see whether an idea works on a specific device, or in a specific context: some sort of in-the-wild prototyping.
We know that devices will perform differently for the same system. Devices may have different processors and different operating systems. The question is: can we use those devices for sketching?
We ran a very simple benchmark on some devices available today to see how they differ. We executed a function that drew one thousand lines, ran it three times, and calculated the average. We set the refresh rate to a very high value and calculated how many frames per second the device was actually able to render. The chart shows 0 to 1000 strokes on the X axis, and the Y axis shows how many FPS each device was able to display. We can see a difference between mobile devices and desktops, as we would expect; we did not expect the difference to be that big.
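Such a benchmark can be sketched like this (an illustrative approximation of the procedure just described, not the original code; `drawLine` is a placeholder for the device's actual canvas call):

```javascript
// Render `numLines` lines as fast as possible and report the
// achieved rate (lines rendered per second) on this device.
function benchmarkFps(drawLine, numLines) {
  const start = Date.now();
  for (let i = 0; i < numLines; i++) {
    drawLine(i);
  }
  const elapsedMs = Math.max(1, Date.now() - start); // avoid division by zero
  return numLines / (elapsedMs / 1000);
}

// Average over several runs, as in the benchmark described above.
function averageFps(drawLine, numLines, runs) {
  let total = 0;
  for (let r = 0; r < runs; r++) {
    total += benchmarkFps(drawLine, numLines);
  }
  return total / runs;
}
```

Comparing the averaged result across devices gives the kind of mobile-versus-desktop gap shown in the chart.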
Why is it important to identify a minimum FPS rate? Developers would know how to make the sketching application respond when performance drops to an unacceptable rate, or they could deliberately keep the refresh rate as low as possible, saving battery for instance. We could even discard some devices for specific types of activity or user profiles if they are not able to refresh the screen at an acceptable rate.
We used a Wacom Cintiq with a refresh rate of 75 Hz, so we could vary the FPS response from 0 to 75 FPS.
We separated the subjects into two groups based on the average "points per second" they produced; in other words, we separated them by their drawing speed. We calculated the speed from all the samples they produced during the experiment. Since the subjects had different profiles, we expected the speed to vary, so this helped us analyze the data better.
In the first phase we asked subjects to draw a square on each part of the screen and then click the buttons EQUAL or DIFFERENT. Each screen had a different FPS value, chosen in a random order. The goal was to find out how subjects would perceive differences of 2, 5 and 10 FPS. Our hypothesis was that subjects would perceive big differences more often than small ones.
Those are the comparisons made for the different FPS ranges. They vary from 0 to 75 FPS since this was the physical limit of the display.
The subjects had to draw a square on each part of the screen and click a button that said EQUAL or DIFFERENT
This chart shows, on the Y axis, how many times subjects clicked DIFFERENT for the different pairs of FPS. The X axis shows the FPS differences (2, 5 and 10) and the groups A and B.
The result of this phase was not conclusive, in the sense that we cannot say that subjects perceived differences of 10 FPS more often than differences of 2 FPS. Perhaps what we thought was a big difference was, in the end, too small to be perceived. The difference was not significant between FPS groups, nor between the FAST and SLOW groups.
The second phase, however, gave us very sound results. It was based on Likert-scale grading of the FPS rates. Subjects were asked to draw a square and grade the speed presented by the system. The grades were REALLY BAD, BAD, NEUTRAL, GOOD and REALLY GOOD, as you can see in the small faces on top. The question was "How happy are you with this response rate?". We expected subjects to grade the lower range of FPS as BAD and the high FPS values as GOOD. Our hypothesis was that there would be a given range of FPS at which subjects would stop grading good and really good; that would mean there is a "turning point".
The FPS rates were shown in a random order.
Each grade had a value from 1 to 5 (on the Y axis), and we can see that the grades increased with the FPS values on the X axis. This is a consistent result and confirms the hypothesis that lower FPS values would receive lower grades, and high FPS values high grades. Now we are interested in knowing when the grades started to stabilize; that would mean "from this FPS on, there is no significant difference".
So we summed up the NEUTRAL, GOOD and REALLY GOOD values, and this is what we got. We can see that the acceptance varies a lot between 0 and 24 FPS, and from that point on users are largely satisfied. So we have a range of acceptance from 24 FPS upward.
Another interesting result is that the slow group gave more good grades than the fast group. The fast group was less satisfied with the FPS rates than the slow group.
The third phase was the active selection. Here subjects could control the speed with the scrollbar, as in the first video I showed. We asked: "What is the minimum, in your opinion?". This was totally subjective, and the goal was to combine the results with those of the second phase.
On the Y axis we have the groups, and on the X axis we have the FPS values they chose. Again, we see the consistency: the slow group was satisfied with lower FPS rates, from 20 to 34, while the fast group was satisfied with a higher range, 23 to 41. When we joined the two groups, we got an overall minimum of 20 FPS, excluding the outliers.
So the summary of the findings is that the range below 20 FPS was rejected, and the grades were not significantly different above 24 FPS. That gives us a range of 20 to 24 FPS as a minimum. And based on the first phase, we found no conclusive evidence that subjects perceived differences of 2, 5 and 10 FPS.