Presented by Laili Irani, Senior Policy Analyst for the Population Reference Bureau, as part of the Measuring Success Toolkit webinar in September 2012.
This document provides an overview of policy and policy analysis. It defines policy as statements that guide decision making and action, and public policy as actions taken by government to address problems. Policy analysis involves investigating and producing information to evaluate policy options using multiple methods; its two major fields are analysis of existing policy and analysis for new policy. The document also outlines approaches, methodologies, and dimensions for analyzing policies, including effectiveness, unintended effects, equity, cost, feasibility, and acceptability.
Monitoring and evaluation serve complementary purposes: monitoring provides real-time information on project implementation, while evaluation provides more in-depth, independent assessment. Monitoring checks progress toward goals and identifies issues so the program can be adjusted; evaluation assesses what worked and what did not. Both are integral to program management. Effective monitoring and evaluation establish what will be monitored and evaluated, who is responsible, which methods and resources will be used, and when activities will take place, in order to validate the program's logic and encourage improvement.
Policy evaluation determines the effectiveness and efficiency of government policies by systematically collecting and analyzing information. It aims to assess whether social interventions have achieved intended results, though results are not always well received. There are two main types of policy evaluation: formative evaluation examines program operations to guide improvement, while summative evaluation measures achievement of goals. The evaluation process involves defining the purpose and scope, specifying an appropriate design, creating a data collection plan, collecting and analyzing data, drawing conclusions, and providing feedback for program improvement. Policy evaluation uses social science research methods to examine policy effects.
The document discusses stakeholder analysis, which involves systematically identifying and assessing individuals, groups, or organizations that may be affected by a project. It outlines the stakeholder analysis process, including identifying key stakeholders, understanding their interests and level of influence, and developing engagement strategies. Tools for stakeholder analysis include stakeholder matrices to map stakeholders based on their impact, interest, and relationship to the project. The document provides an example stakeholder analysis table to collect information on stakeholders.
Project monitoring and evaluation involves collecting data on project processes, outputs, and outcomes to track progress and inform stakeholders. Monitoring is continuous and internal, while evaluation is periodic and can be internal or external. The key aspects of monitoring include tracking inputs, activities, the process, and outputs, while evaluation assesses outcomes, impacts, efficiency, effectiveness and sustainability. Both use qualitative and quantitative data and involve stakeholders. Participatory monitoring and evaluation engages local people and beneficiaries to better understand impacts and ensure the process is learning-focused and adaptive.
Monitoring and Evaluation: Learning and Development (Sesh Sukhdeo)
The document outlines steps for monitoring and evaluation (M&E) including: understanding existing documentation and systems, gathering M&E information, enhancing stakeholder buy-in, preparing a detailed M&E plan with indicators, baselines and targets, establishing an implementation team, providing training, and closely monitoring implementation against indicators. It also discusses key concepts in M&E like the difference between monitoring and evaluation, levels of evaluation, and participatory M&E.
6 M&E - Monitoring and Evaluation of Aid Projects (Tony)
A series of course modules on project cycle, planning and the logical framework, aimed at team leaders of international NGOs in developing countries.
This is part 6 of 11, beginning with 2 modules on leadership and conflict resolution, then 9 modules on project cycle management.
This module has 3 handouts and presenter notes as separate documents.
Sample Proposal: http://www.slideshare.net/Makewa/6-watsan-training-sample-proposal-09
Slides as a handout: http://www.slideshare.net/Makewa/6-me-handout
Presenter notes: http://www.slideshare.net/Makewa/6-module-6-presenter-notes
The document discusses monitoring and evaluation (M&E) of health programs. It defines monitoring as the routine collection of data to track progress toward objectives, while evaluation assesses a program's impact by measuring outcomes at baseline and endline against a control group. It provides guidance on developing M&E plans, including describing programs and expected outcomes, identifying indicators, setting data collection sources and schedules, and disseminating findings to inform decision making.
The document outlines an M&E training to be held at the Travellers Beach Hotel. The training will cover methods of data collection, organization, analysis, reporting and presentation of M&E results. Key topics will include project control tools like Gantt charts, milestone charts, and earned value analysis which compares planned to actual performance to monitor project progress. The overall goal is to help participants explore best practices for monitoring and evaluating projects.
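The planned-versus-actual comparison behind earned value analysis reduces to a few ratios. A minimal sketch in Python (the function name and figures are invented for illustration, not taken from the training materials):

```python
# Earned value analysis: compare planned value (PV), earned value (EV),
# and actual cost (AC) to monitor schedule and cost performance.

def earned_value_metrics(pv, ev, ac):
    """pv: planned value, ev: earned value, ac: actual cost."""
    return {
        "schedule_variance": ev - pv,  # negative => behind schedule
        "cost_variance": ev - ac,      # negative => over budget
        "spi": ev / pv,                # schedule performance index
        "cpi": ev / ac,                # cost performance index
    }

# Example: at the midpoint we planned to spend $50k, completed work
# worth $40k, and actually spent $45k.
m = earned_value_metrics(pv=50_000, ev=40_000, ac=45_000)
print(m["spi"])  # 0.8 -- behind schedule
print(m["cpi"])  # about 0.89 -- over budget
```

An SPI or CPI below 1.0 signals that the project is behind schedule or over budget, which is exactly the kind of early warning the control tools above are meant to provide.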
This document provides an overview of policy analysis, outlining several key points:
- It defines policy analysis as a process used to determine what a policy will achieve or has achieved. Approaches include descriptive analysis of existing policies and prescriptive analysis to formulate new policies.
- The importance of policy analysis is highlighted, such as assessing situations, seeking acceptance, providing opportunities for modification, and facilitating evidence-based decision making.
- Several models of policy analysis are described, including process, substantive, eightfold path, logical-positivist, and participatory policy analysis.
- The use of indicators and outcomes to evaluate policies is discussed, noting they can measure results at the population, agency
Monitoring and Evaluation of Health Services (Nayyar Kazmi)
This document provides an overview of monitoring and evaluation (M&E) of health services. It discusses the key differences between monitoring and evaluation, and explains that M&E is important to assess whether health programs and services are achieving their goals and objectives. The document also outlines the main components and steps involved in conducting evaluations, including developing indicators, collecting and analyzing data, reporting findings, and implementing recommendations.
The document discusses policy implementation, which involves carrying out the activities designed by the legislative branch to achieve its policy goals. This includes establishing and staffing new agencies or assigning new responsibilities to existing agencies. The implementing agencies then translate the legislative intent into operational rules and guidelines, and coordinate resources and personnel to achieve the intended goals.
Program implementation refers to carrying out proposed activities and interventions in practice to achieve planned objectives and results. It depends on factors like an organized project team and monitoring progress and spending. Overall management is led by a project manager from the lead partner, who must have an efficient system and remain flexible to current needs and changes from initial plans, while still delivering quality results agreed upon with partners.
This document provides an overview of monitoring and evaluation (M&E) processes at Room to Read. It discusses key M&E concepts like indicators, data collection, and the Global Solutions Database. It also outlines Room to Read's approach to M&E, including defining goals and objectives, collecting and analyzing global and country-specific indicators, ensuring data quality, and using M&E data to track progress and improve programs. Examples of indicators for different Room to Read programs like reading rooms and girls' education are also presented.
The document outlines the objectives, principles, content areas and task levels of the Division Monitoring and Evaluation framework. The key points are:
1. The objectives of the framework are to provide management information to improve education service delivery, implement projects and programs effectively, allocate resources appropriately, and assess organizational performance.
2. Principles of the framework include ensuring quality information, strengthening existing systems, achieving results efficiently, transparency, synergy between entities, and using M&E for continuous learning and accountability.
3. Content areas of focus for M&E in the division are delivery of education services, educational programs/projects, curriculum implementation, technical assistance, resources, and organizational effectiveness and support.
The document provides an overview of the logical framework approach (LFA), including its history, key concepts, and uses. It describes the LFA as a systematic planning process used in project design and management. The LFA involves analysis and planning phases. During analysis, problems are identified, objectives are set, and strategies are analyzed. In planning, objectives and their indicators are organized into a logical framework matrix. The matrix lays out the project's goals, objectives, outputs, activities, and assumptions to provide a framework for monitoring and evaluation. The LFA is a tool used widely by development organizations to improve project design, management, and assessment.
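The logical framework matrix described above is, at bottom, a small table: four levels crossed with four columns. A sketch of that structure as plain data, with the water-project entries invented purely for illustration:

```python
# A logframe matrix as nested data. Rows are the standard LFA levels;
# columns are narrative, indicators, means of verification, assumptions.
# All project-specific content here is hypothetical.

logframe = {
    "goal": {
        "narrative": "Reduced waterborne disease in the district",
        "indicators": ["Diarrhoea incidence among under-fives"],
        "verification": ["District health records"],
        "assumptions": ["Other health services remain available"],
    },
    "purpose": {
        "narrative": "Households use safe drinking water",
        "indicators": ["% of households using an improved source"],
        "verification": ["Household survey at baseline and endline"],
        "assumptions": ["Communities maintain the water points"],
    },
    "outputs": {
        "narrative": "Functioning water points constructed",
        "indicators": ["Number of water points passing quality tests"],
        "verification": ["Engineer inspection reports"],
        "assumptions": ["Spare parts remain locally available"],
    },
    "activities": {
        "narrative": "Site selection, drilling, committee training",
        "indicators": ["Budget and workplan milestones met"],
        "verification": ["Project accounts, training registers"],
        "assumptions": ["Rainy season does not block site access"],
    },
}

# The indicators column is what monitoring tracks; the assumptions
# column is what evaluation revisits when results diverge from plan.
for level, row in logframe.items():
    print(level, "->", row["indicators"][0])
```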
Strategic Planning Models by Dr. Eusebio F. Miclat Jr., Development Planning &... (Jo Balucanag - Bitonio)
The document discusses various planning models and concepts:
1. It describes several planning models including rational, incremental, transactive, advocacy, and radical models.
2. It also summarizes different planning process models including situational analysis, goal setting, policy formulation, project identification, implementation, and evaluation.
3. Key components of planning are identified as information inputs, planning tools, organization, activities, and outputs which include plans, strategies, and performance evaluation.
The document summarizes strategies used by the MEASURE Evaluation project to disseminate and promote the use of results from evaluations of orphan and vulnerable children (OVC) programs. A comprehensive data use strategy involved stakeholders throughout the research process to ensure collection of relevant data and uptake of findings. Key results were packaged and disseminated in various formats to diverse audiences. Workshops with OVC program staff and national stakeholders in Tanzania facilitated discussion of findings and development of action plans to apply results for program improvement and decision-making.
Transitioning from reach every district to reach every community (JSI)
The presentation describes the expansion for routine immunization from district level to community level in Africa. Reaching remote communities is important to bring immunization to all children.
This summary provides an overview of 3 implementation research studies on integrated community case management (iCCM) conducted by the University Research Co., LLC.
The first study analyzed iCCM policies in 6 countries to understand how policy context, actors, and processes influence iCCM implementation. It found that policies often did not explicitly mention iCCM and were developed with technical staff but lacked engagement from key stakeholders. External funding was critical for policy development. The second study developed an iCCM costing and financing tool to help countries estimate costs and plan long-term financing. It was tested in Malawi and Senegal. The third study examined an iCCM monitoring improvement project in an unnamed country. Overall, the studies provide insights into real-world implementation.
Setting the scene – Trends in programming Research and Innovation for Impact (Francois Stepman)
6 April 2018, Rome. The SCAR Strategic Working Groups ARCH, AKIS, and Food Systems jointly organised the workshop "Programming Research and Innovation for Improved Impact".
Presentation by Paul Winter
Participatory Monitoring and Evaluation (PM&E) is a social process that ensures stakeholder participation in monitoring and evaluating program activities. It involves community members in monitoring activities and in the design and execution of evaluations. The key principles of PM&E are participation, learning, negotiation, and flexibility. PM&E methods range from informal conversations to formal structured tools. PM&E builds ownership, accountability, and empowerment while improving information for strategic planning.
Lessons Learned In Using the Most Significant Change Technique in Evaluation (MEASURE Evaluation)
This document summarizes lessons learned from using the Most Significant Change (MSC) technique in evaluations conducted in five countries. The MSC technique involves collecting stories from participants about significant changes resulting from an intervention, analyzing the stories to identify themes, and sharing the stories with stakeholders. The document discusses strengths and limitations of MSC, provides examples of its application in different programs and countries, and identifies lessons learned. Key lessons are that MSC generates rich qualitative data but requires careful facilitation and training, and follow-up interviews can further strengthen learning from the approach.
The document discusses monitoring and evaluation (M&E) of public health projects. It provides examples of M&E plans that include objectives, interventions, indicators, targets, data collection methods, frequencies, and responsibilities. Specifically, it shows how to develop a monitoring plan by determining what to monitor, how to collect data, who is involved, resources needed, and creating a workplan. It also distinguishes between monitoring and evaluation and provides templates to plan for monitoring achievement of outputs and progress toward objectives. The key aspects of M&E planning discussed are tying indicators to objectives and interventions, establishing data collection methods and responsibilities, and monitoring on a regular basis to track progress.
This document summarizes the Tote Board's approach to impact measurement for its Enabling Lives Initiative (TB-ELI) grant program. The TB-ELI aims to improve the quality of life of persons with disabilities and their caregivers. Impact will be measured at two levels: at the project level to determine if individual projects met their intended outcomes, and at the program level to evaluate the effectiveness of the collective impact model and measure the overall impact of the TB-ELI grant program through both a process evaluation and assessment of the difference made to the disability landscape in Singapore.
This document provides guidance on developing monitoring plans for public health projects and activities. It discusses the importance of monitoring and evaluation for demonstrating impact, accountability and improving future work. Key points covered include:
- Monitoring should track a limited number of key indicators to collect minimal but useful information. Both quantitative and qualitative data should be gathered through routine systems and field visits.
- A monitoring plan should specify what will be monitored, how through appropriate methods, who will be responsible for collection and reporting, and when through determined frequencies. Baseline data and indicators need to be established.
- Examples of monitoring plans for different types of interventions are provided, including sectoral, integrated IEC, value-added, and linked interventions.
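The what/how/who/when structure above can be sketched as one row per indicator, plus a simple off-track check. A minimal illustration in Python (indicators, baselines, and targets are invented placeholders, not from the document):

```python
# A monitoring plan: one row per indicator, following the
# what / how (method) / who (responsible) / when (frequency) structure,
# with baseline and target established up front.

monitoring_plan = [
    {"indicator": "Households reached with IEC materials",
     "method": "Distribution logs", "responsible": "Field officer",
     "frequency": "monthly", "baseline": 0, "target": 5000},
    {"indicator": "Community sessions held",
     "method": "Field visit reports", "responsible": "M&E officer",
     "frequency": "quarterly", "baseline": 0, "target": 40},
]

def on_track(row, actual, period_fraction):
    """Compare actual progress to the share of the target expected so far."""
    expected = row["baseline"] + period_fraction * (row["target"] - row["baseline"])
    return actual >= expected

# Halfway through the project: 2,100 households reached vs 2,500 expected.
print(on_track(monitoring_plan[0], actual=2100, period_fraction=0.5))  # False
```

A table like this keeps the indicator list deliberately short, in line with the advice above to collect minimal but useful information.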
The document summarizes the Health Policy Research Group's (HPRG) experiences getting research into policy and practice in Nigeria. It identifies four models or strategies used by HPRG: 1) Policymakers seeking evidence from researchers, 2) Involving stakeholders throughout the research process, 3) Facilitating engagement between researchers and policymakers, and 4) Active dissemination of findings. Interviews with stakeholders found that approaches involving collaboration and active dissemination were most effective at influencing policy. Key enablers included policymaker willingness to consider research findings even if they contradicted existing policies.
The document summarizes the experiences of the Health Policy Research Group (HPRG) in Nigeria in seeking to bridge the gap between researchers and policymakers. It outlines four models that emerged from HPRG's work: 1) policymakers seeking evidence from researchers, 2) involving stakeholders throughout the research process, 3) facilitating engagement between researchers and policymakers, and 4) actively disseminating research findings. It also discusses enabling factors like trust and credibility, and challenges such as lack of policymaker capacity and political influences. The conclusion is that context-specific strategies are needed to educate policymakers and influence domestic policies using research.
This document provides tools and guidance to help the nonprofit organization Chintan develop an advocacy strategy and measure the impact of its advocacy work. It begins with a summary of case studies on waste management challenges and best practices. It then outlines an advocacy plan involving political pressure, strategic relationships, and social media. A strategic advocacy framework is presented, including developing a theory of change and monitoring and evaluation plan. Finally, it discusses approaches to measuring social impact through quantitative and qualitative methods like surveys, interviews and focus groups. Recommendations include creating a theory of change and focusing on partnerships and organizational learning.
David Pelletier, Associate Professor of Nutrition Policy, Division of Nutritio... (SUN_Movement)
This document discusses building multisectoral nutrition systems in Africa through the African Nutrition Security Partnership (ANSP). It provides an overview of ANSP's objectives to reduce stunting through policy development, capacity building, information systems, and scaling up interventions. It then discusses conceptualizing multisectoral nutrition as a complex system and presents tools and strategies for building functional multisectoral nutrition structures, including sensitizing concepts, knowledge brokering, and lessons learned across countries.
Community, rights, gender and the new funding model (clac.cab)
The document discusses principles of the Global Fund's new funding model, including focusing funding on countries with the highest disease burden and lowest ability to pay. It outlines the funding cycle and concept note structure, emphasizing national strategic plans as the basis for funding requests. It also covers preparing for the new model, including minimum standards for implementers and the modular approach to structuring grants. Community systems strengthening is discussed as supporting service provision, accountability, and mobilization. Human rights are integrated in the Global Fund's strategy through addressing barriers to access and ensuring funding does not violate rights.
CRG presentation to technical assistance providers (clac.cab)
The document provides an overview of key concepts related to human rights and the Global Fund's approach. It discusses the main international human rights treaties and conventions that form the basis of human rights standards. It outlines the main civil/political rights and economic/social/cultural rights that states have obligations to respect, protect, and fulfill. It also discusses the right to health and key principles like availability, accessibility, acceptability, and quality. Finally, it notes that states must put in place laws/policies, provide means of recourse, and take deliberate steps to progressively realize rights.
Final outline plan for webinar evaluation and impact assessment mof 2004 (EricaPackingtonIOD)
This document provides guidance for consultants conducting evaluations and impact assessments of WaterAid's Governance and Transparency Fund (GTF) programme. It outlines the purpose and key stakeholders for the evaluation and impact assessment. Consultants have 25 days to complete both exercises. The evaluation will assess programme performance against objectives, while the impact assessment focuses on understanding changes in people's lives resulting from the programme. Guidance is provided on evaluation questions, methodology, timelines, and the differences between evaluations and impact assessments. Countries will take different approaches depending on whether a full or small-scale evaluation is required.
Managing missing values in routinely reported data: One approach from the Dem... (MEASURE Evaluation)
This Data for Impact webinar was held in December 2020. Access the recording and learn more at https://www.data4impactproject.org/resources/webinars/managing-missing-values-in-routinely-reported-data-one-approach-from-the-democratic-republic-of-the-congo/
This Data for Impact webinar took place October 29, 2020. Learn more at https://www.data4impactproject.org/resources/webinars/use-of-routine-data-for-economic-evaluations/
Data for Impact hosted a one-hour webinar sharing guidance for using routine data in evaluations. More: https://www.data4impactproject.org/resources/webinars/routine-data-use-in-evaluation-practical-guidance/
Tuberculosis/HIV Mobility Study: Objectives and Background (MEASURE Evaluation)
Data for Impact: Lessons Learned in Using the Ripple Effects Mapping Method (MEASURE Evaluation)
The document summarizes experiences using the Ripple Effects Mapping (REM) method to evaluate development programs in Tanzania and Botswana. REM is a participatory method that engages stakeholders to visually map the different effects of a program. The summaries describe:
1) How REM was used to evaluate a governance program in Tanzania, including training facilitators, conducting interviews and group mapping sessions, and analyzing results.
2) Tailoring REM for evaluating a youth program in Botswana, such as adjusting questions for younger participants and capturing complex outcomes.
3) Lessons learned about facilitating REM, including the need for extensive training, tailoring the method to the population, and allowing time for discussion to fully explore outcomes.
Development and Validation of a Reproductive Empowerment Scale (MEASURE Evaluation)
This document describes a study that developed and validated a Reproductive Empowerment Scale for use in Nigeria. Researchers created items to measure women's agency regarding their reproductive health and tested the scale's psychometric properties. The results supported the scale as a valid and reliable measure of reproductive empowerment for women in Nigeria.
Malaria Data Quality and Use in Selected Centers of Excellence in Madagascar:... (MEASURE Evaluation)
This document summarizes the results of a cross-sectional baseline survey assessing malaria data quality and use in health centers in Madagascar that were selected as Centers of Excellence to improve data practices. The survey found that while reporting completeness and timeliness were high, data accuracy remained an issue. Baseline performance on data quality indicators was similar between the intervention sites that would implement Centers of Excellence and control sites. The implementation of Centers of Excellence aims to drive improvements in data quality, analysis, and use for decision-making in Madagascar.
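Completeness, timeliness, and accuracy, the three data quality dimensions assessed above, are commonly computed as simple ratios. A sketch in Python (the function names and figures are illustrative, not drawn from the Madagascar survey):

```python
# Routine-data quality indicators for facility reporting:
# completeness and timeliness of reports, plus a verification
# factor comparing recounted source-document values to reported ones.

def completeness(received, expected):
    """Share of expected reports actually received."""
    return received / expected

def timeliness(on_time, expected):
    """Share of expected reports received by the deadline."""
    return on_time / expected

def verification_factor(recounted, reported):
    """Close to 1.0 means reported figures match source documents."""
    return recounted / reported

# 12 monthly reports expected; 12 received, 10 of them on time.
print(round(completeness(12, 12), 2))  # 1.0
print(round(timeliness(10, 12), 2))    # 0.83
# A register recount finds 95 cases against 110 reported (over-reporting).
print(round(verification_factor(95, 110), 2))  # 0.86
```

This mirrors the pattern in the survey findings: completeness and timeliness can be high while the verification factor still reveals accuracy problems.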
Evaluating National Malaria Programs’ Impact in Moderate- and Low-Transmissio... (MEASURE Evaluation)
The framework highlights the importance of routine surveillance data and confirmed malaria incidence for evaluating national malaria programs in low- and moderate-transmission settings. Process evaluations assess program performance and coverage to determine when impact evaluations are needed. Impact evaluations then measure reductions in malaria burden using methods like interrupted time series and constructed controls while accounting for other factors. Key challenges include defining intervention maturity and coverage thresholds needed to achieve measurable impact. The framework emphasizes continuous evaluation along the implementation and impact pathways to guide program decisions.
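The interrupted time series logic mentioned above can be reduced to a minimal sketch: fit the pre-intervention trend, project it forward as the counterfactual, and compare with observed post-intervention incidence. The monthly case counts below are invented, and a real analysis would model level and slope changes with proper inference rather than this simplified comparison:

```python
# Simplified interrupted-time-series comparison: pre-trend projection
# versus observed post-intervention values. Illustrative data only.

def linear_fit(xs, ys):
    """Ordinary least squares for a single predictor: intercept, slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

pre = [100, 98, 97, 95, 94, 92]   # months 0-5, before scale-up
post = [80, 76, 73, 70]           # months 6-9, after scale-up

b0, b1 = linear_fit(range(len(pre)), pre)

# Counterfactual: what the pre-intervention trend predicts for the
# post-intervention months had nothing changed.
counterfactual = [b0 + b1 * t for t in range(len(pre), len(pre) + len(post))]
effect = sum(o - c for o, c in zip(post, counterfactual)) / len(post)
print(round(effect, 1))  # -13.5: average monthly cases below trend
```

The gap between observed and counterfactual is the quantity an impact evaluation tries to attribute to the program, after accounting for the other factors the framework mentions.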
Improved Performance of the Malaria Surveillance, Monitoring, and Evaluation ... (MEASURE Evaluation)
MEASURE Evaluation's support between 2015 and 2018 likely contributed to significant improvements in Madagascar's malaria surveillance system. Key improvements included: 1) availability of guiding documents, 2) increased completeness and timeliness of facility and community reporting, and 3) establishment of a culture of data dissemination and use. According to the assessment, data quality, reporting rates, and staff capacity all increased significantly over this period. Continued support is needed as Madagascar works toward malaria elimination.
Lessons learned in using process tracing for evaluation (MEASURE Evaluation)
Access the recording for this Data for Impact (D4I) webinar at https://www.data4impactproject.org/lessons-learned-in-using-process-tracing-for-evaluation/
GDG Cloud Southlake #34: Neatsun Ziv: Automating AppSec (James Anderson)
The lecture titled "Automating AppSec" delves into the critical challenges associated with manual application security (AppSec) processes and outlines strategic approaches for incorporating automation to enhance efficiency, accuracy, and scalability. The lecture is structured to highlight the inherent difficulties in traditional AppSec practices, emphasizing the labor-intensive triage of issues, the complexity of identifying responsible owners for security flaws, and the challenges of implementing security checks within CI/CD pipelines. Furthermore, it provides actionable insights on automating these processes, not only to mitigate these pains but also to enable a more proactive and scalable security posture within development cycles.
The Pains of Manual AppSec:
This section will explore the time-consuming and error-prone nature of manually triaging security issues, including the difficulty of prioritizing vulnerabilities based on their actual risk to the organization. It will also discuss the challenges in determining ownership for remediation tasks, a process often complicated by cross-functional teams and microservices architectures. Additionally, the inefficiencies of manual checks within CI/CD gates will be examined, highlighting how they can delay deployments and introduce security risks.
Automating CI/CD Gates:
Here, the focus shifts to the automation of security within the CI/CD pipelines. The lecture will cover methods to seamlessly integrate security tools that automatically scan for vulnerabilities as part of the build process, thereby ensuring that security is a core component of the development lifecycle. Strategies for configuring automated gates that can block or flag builds based on the severity of detected issues will be discussed, ensuring that only secure code progresses through the pipeline.
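A severity-based gate of the kind described is small in essence: collect scan findings and fail the build when anything at or above a configured threshold is present. A minimal sketch in Python (the severity names, CVE identifiers, and gate function are invented for illustration, not taken from any specific scanner):

```python
# A CI/CD security gate: block the build if any finding meets or
# exceeds the configured severity threshold.

SEVERITY_ORDER = ["low", "medium", "high", "critical"]

def gate(findings, block_at="high"):
    """Return pass/fail plus the findings that triggered the block."""
    threshold = SEVERITY_ORDER.index(block_at)
    blocking = [f for f in findings
                if SEVERITY_ORDER.index(f["severity"]) >= threshold]
    return {"passed": not blocking, "blocking": blocking}

findings = [
    {"id": "CVE-2024-0001", "severity": "medium"},
    {"id": "CVE-2024-0002", "severity": "critical"},
]

result = gate(findings, block_at="high")
print(result["passed"])  # False: the critical finding blocks the build
```

In a real pipeline this check would run after the scanner step and exit non-zero on failure, which is what actually stops the deployment.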
Triaging Issues with Automation:
This segment addresses how automation can be leveraged to intelligently triage and prioritize security issues. It will cover technologies and methodologies for automatically assessing the context and potential impact of vulnerabilities, facilitating quicker and more accurate decision-making. The use of automated alerting and reporting mechanisms to ensure the right stakeholders are informed in a timely manner will also be discussed.
Identifying Ownership Automatically:
Automating the process of identifying who owns the responsibility for fixing specific security issues is critical for efficient remediation. This part of the lecture will explore tools and practices for mapping vulnerabilities to code owners, leveraging version control and project management tools.
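One common way to map vulnerabilities to owners is CODEOWNERS-style pattern matching, where the last matching rule wins. A sketch in Python (the teams, paths, and rules are invented; real CODEOWNERS matching has additional subtleties this glob-based approximation ignores):

```python
# Map a vulnerable file path to its owning team using ordered
# CODEOWNERS-style rules: later matching patterns override earlier ones.

from fnmatch import fnmatch

CODEOWNERS = [
    ("*", "@platform-team"),                 # default owner
    ("services/payments/*", "@payments-team"),
    ("*.tf", "@infra-team"),
]

def owner_for(path):
    owner = None
    for pattern, team in CODEOWNERS:
        if fnmatch(path, pattern):
            owner = team  # last match wins
    return owner

print(owner_for("services/payments/charge.py"))  # @payments-team
print(owner_for("docs/readme.md"))               # @platform-team
```

Joining this lookup against scanner output gives each finding a named owner, which is the routing step that manual triage struggles to keep current.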
Three Tips to Scale the Shift Left Program:
Finally, the lecture will offer three practical tips for organizations looking to scale their Shift Left security programs. These will include recommendations on fostering a security culture within development teams and employing DevSecOps principles to integrate security throughout the development lifecycle.
How to Avoid Learning the Linux-Kernel Memory Model (ScyllaDB)
The Linux-kernel memory model (LKMM) is a powerful tool for developing highly concurrent Linux-kernel code, but it also has a steep learning curve. Wouldn't it be great to get most of LKMM's benefits without the learning curve?
This talk will describe how to do exactly that by using the standard Linux-kernel APIs (locking, reference counting, RCU) along with simple rules of thumb, thus gaining most of LKMM's power with less learning. And the full LKMM is always there when you need it!
MYIR Product Brochure - A Global Provider of Embedded SOMs & Solutions (Linda Zhang)
This brochure introduces MYIR Electronics and its products and services.
MYIR Electronics Limited (MYIR for short), established in 2011, is a global provider of embedded System-On-Modules (SOMs) and comprehensive solutions based on architectures such as ARM, FPGA, RISC-V, and AI. We cater to customers' needs for large-scale production, offering customized design, industry-specific application solutions, and one-stop OEM services. MYIR, recognized as a national high-tech enterprise, is also listed among the "Specialized and Special New" enterprises in Shenzhen, China. Our core belief is that "our success stems from our customers' success," and we embrace the philosophy of "Make Your Idea Real, then My Idea Realizing!"
Fluttercon 2024: Showing that you care about security - OpenSSF Scorecards fo... (Chris Swan)
Two Examples of Program Planning, Monitoring and Evaluation
1. Two Examples of Program Planning, Monitoring and Evaluation
Laili Irani
2. Example 1: Evaluate a Family Planning Program
Objective:
– To evaluate the impact of a family planning program in a rural village in West Africa
Main goal:
– To increase contraceptive knowledge, fertility preferences and contraceptive use
3. Q.1) What is the problem and why does it exist?
Study recent DHS for evidence on the rural area of interest
– Results show: low contraceptive prevalence, knowledge and attitudes
See Toolkit for examples of population-based surveys
Review data from formative research
– Results show: low prevalence of contraceptive use
4. Q.1) What is the problem and why does it exist? (cont.)
Conduct a needs assessment
– Survey village leaders and community members
– In this example, results show:
• Greater need for modern contraceptive methods
• Easier access to a wider range of methods
See Toolkit on how to conduct a needs assessment
5. Q.2) What interventions can work?
Design an intervention
– A community outreach program in which health workers visit homes and address the contraceptive needs of families
Plan a pilot project
– Carry out the project in one neighborhood of the village
– Expand to the entire village
6. Q.3) What are we doing?
Develop a logic model
– Inputs → outcomes
See Toolkit for description and examples of logic models
Create an M&E plan
– Include a timeline of program activities
See Toolkit for sample outline & program examples
Produce a Performance Monitoring Plan (PMP)
– Share proposed activities with stakeholders and donors
– Identify indicators to be collected and analyzed
See Toolkit for compendium of FP indicators
Engage stakeholders in every step of the program
7. Q.3) What are we doing? (cont.)
Monitor the various components of the program
– Inputs
• Finances, staff, training materials, contraceptives and transportation
– Processes
• Health workers trained to work in the community
• Health workers visit community periodically and distribute contraceptives
• Program officers meet with village leaders often
Ensure quality of program is maintained
8. Q.4) Are we implementing the program as planned?
Output monitoring
– Number of first visits made
– Number of follow-up visits
– Types and numbers of contraceptives distributed
Outcome monitoring
– Change in percentage of contraceptive users over time
See Toolkit for indicator guides for FP programs
9. Q.5) Are the interventions working / making a difference?
Outcome evaluation
See Toolkit for examples of evaluation designs and resources
10. Q.6) Is the program sustainable and scalable?
Sustainability
– Ensure local government continues the community outreach program with the aid of health workers
– Build the community’s capacity to encourage voluntary contraceptive use among families
Scalability
– Expand the program to other villages and regions within the country
See Toolkit for means to measure sustainability and scalability
11. Next Steps
Share findings with all the stakeholders, including
– Village leaders and community members
– Local government and health department
– Funding agency
– Higher levels of government and health leadership
Disseminate findings widely, including through mass media, research literature and the internet
12. Example 2: Evaluate a Malaria Prevention Program
Objective:
– To evaluate the impact of a malaria prevention program in a district in East Africa
Main goal:
– To ensure all pregnant women and children <14 years are sleeping under insecticide-treated nets (ITNs) in all the villages of a district in East Africa
13. Q.1) What is the problem & why does it exist?
Study recent population-based surveys
– DHS, Multiple Indicator Cluster Survey
– Results show: high malaria prevalence and low ITN use
Conduct a needs assessment
– Visit selected homes; interview selected community members
– In this example, results show: use of ITNs is low due to lack of knowledge and cost
See Toolkit for various data sources and assessment designs
14. Q.2) What interventions can work?
Review other programs
Collaborate with more experienced programs
Plan a pilot project
Design an intervention
– Use handheld GPS devices to create clusters
– Visit all homes, identify pregnant women and children <14 years and provide them with vouchers for ITNs
– Program staff visit to ensure ITNs are installed correctly and teach villagers how to reapply insecticide
See Toolkit for resources, e.g., the Roll Back Malaria website
15. Q.3) What are we doing?
Develop a logic model
– Inputs → outcomes
Develop a PMP
Engage stakeholders in every step of the program
– Community members and village leaders
– Local government and district officials
– Experts in the field
– Donors and policy makers
See Toolkit for a logic model and draft checklist for developing a PMP for a malaria program
16. Q.4) Are we implementing the program as planned?
Monitor the various components of the program
– Ensure quality of program is maintained
See Toolkit for:
– Indicator guides for malaria programs
– References on how to conduct routine monitoring
– Impact evaluation references
17. Q.5) Are the interventions working / making a difference?
Outcome evaluation
See Toolkit for alternative study designs
18. Q.6) Is the program sustainable and scalable?
Sustainability
– Ensure local government continues the voucher program
– Empower community leaders to encourage the community to access ITNs and to use them effectively
Scalability
– Expand the program to other districts and regions within the country
See Toolkit for descriptions and examples of how programs can be sustainable and scalable
Now, we will walk through two examples of program planning, monitoring and evaluation. As we present the examples, we will highlight the resources that are available within the toolkit.
The first example involves evaluating the impact of a family planning program in a rural village in West Africa. The goal is to increase contraceptive knowledge, fertility preferences (preferences for limiting and spacing births, modern contraceptive use and parity) and modern contraceptive use.
We follow the same sequence of questions described in the Toolkit. These questions describe the steps used when conducting an evaluation plan. For our particular question of interest, if there is a recent Demographic and Health Survey (DHS) from the country, data from the rural area could show low contraceptive prevalence, knowledge and attitudes (as well as limited exposure to media and other potential intervention strategies). The Toolkit outlines examples of other potential population-based surveys and their strengths and limitations. For our example, data from pre-existing formative research would also show a low prevalence of contraceptive use.
Or you could conduct a needs assessment in the village of interest by surveying village leaders and community members. In this example, the results would show a greater need for access to a wide range of modern contraceptive methods. The Toolkit describes how to conduct a needs assessment for your question of interest.
Design a community outreach program in which health workers visit homes and address the contraceptive needs of families. Before carrying out the full-fledged intervention, it is carried out in one neighborhood of the village as a pilot project. The lessons learned from feedback from the health workers and community members are used to improve the project design, and the project is then expanded to the entire village.
In order to track our activities, we can develop and use several tools. A logic model will outline all the steps between inputs and outcomes; the Toolkit describes logic models and provides examples as well. We can also create an M&E plan, which is helpful because it includes a timeline of program activities; the Toolkit gives a good outline of an M&E plan and also has some program examples. Furthermore, we could produce a PMP, which is useful because this document can be shared with donors and other stakeholders to give them an update on proposed activities, and it lists the indicators that will be collected and analyzed; the Toolkit has a great compendium of good FP indicators. In all these steps and processes, it is important to engage stakeholders in program planning and implementation as well as in monitoring and evaluation.
Once our program activities are underway, we monitor the various components of the program. We monitor the use of inputs such as finances, staff, training materials, contraceptives and transportation. Some indicators that can be used to monitor our FP program activities include: the number of health workers trained to work in the community; the number of health workers who actually visit the community periodically and distribute contraceptives; and the frequency of meetings between program officers and village leaders. Monitoring of inputs and processes ensures that the quality of the program activities is maintained.
In order to determine whether we are implementing the program as planned, we collect output and outcome monitoring indicators. Output monitoring indicators for our FP program include: the number of first visits made; the number of follow-up visits; and the types and numbers of contraceptives distributed. An indicator for outcome monitoring is the change in the percentage of contraceptive users over time. The Toolkit has some excellent indicator guides for FP programs, including indicators of fertility preferences (preferences for limiting and spacing births, modern contraceptive use and parity).
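The outcome-monitoring indicator described above is simple percentage-point arithmetic. A minimal sketch, with all survey figures invented for illustration (not program data):

```python
# Outcome monitoring: change in contraceptive prevalence over time.
# All counts below are hypothetical, for illustration only.

def prevalence(users, women_surveyed):
    """Contraceptive prevalence as a percentage of women surveyed."""
    return 100.0 * users / women_surveyed

baseline = prevalence(users=90, women_surveyed=600)    # baseline survey round
followup = prevalence(users=168, women_surveyed=600)   # follow-up survey round

change = followup - baseline
print(f"Baseline prevalence:  {baseline:.1f}%")
print(f"Follow-up prevalence: {followup:.1f}%")
print(f"Change: {change:+.1f} percentage points")
```

Reporting the change in percentage points (rather than a relative percent change) keeps the indicator comparable across survey rounds with different denominators.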
In order to determine whether the FP intervention actually made a difference in contraceptive use, we conduct an outcome evaluation. One possible study design is a quasi-experimental design in which we choose a control village early on. The villagers in this control village have sociodemographic characteristics similar to those in the intervention village. We collect data on contraceptive use, knowledge and attitudes before and after the intervention in both villages, and determine whether contraceptive use increased significantly in the intervention village. The Toolkit also has examples of other evaluation designs and resources. An impact evaluation should be conducted a few years after the end of the program; it needs to measure the attribution of change to the intervention, taking into account all other changes and interventions that might have occurred during the same period, and it can use existing data sources, such as large surveys.
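The pre/post comparison with a control village described above can be reduced to a difference-in-differences calculation: the change in the intervention village minus the change in the control village. A sketch under that assumption, using invented prevalence figures:

```python
# Quasi-experimental sketch: pre/post prevalence in intervention vs. control
# village, summarized as a difference-in-differences estimate.
# Prevalence figures (%) are hypothetical, for illustration only.

surveys = {
    "intervention": {"before": 15.0, "after": 28.0},
    "control":      {"before": 14.0, "after": 17.0},
}

def diff_in_diff(data):
    """Change in the intervention village minus change in the control village."""
    change = {village: d["after"] - d["before"] for village, d in data.items()}
    return change["intervention"] - change["control"]

effect = diff_in_diff(surveys)
print(f"Estimated program effect: {effect:+.1f} percentage points")
```

Subtracting the control village's change nets out secular trends (e.g., national campaigns) that would have raised contraceptive use even without the program.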
Once the evaluation results show that our FP program was successful, we can make it sustainable by ensuring that the local government continues the community outreach program. With the aid of the same health workers, we can also build the community’s capacity to continue accessing voluntary family planning at local pharmacies and health care centers. Furthermore, the program can be scaled up and expanded to other villages and regions within the country. The Toolkit describes how to make programs sustainable and scalable and provides several successful examples as well.
Once the program has been implemented and evaluated, it is important to share the findings and best practices with all the stakeholders that were involved, including the village leaders and community members, the local government and health department, the funding agency, and higher levels of government and health leadership. The findings can also be shared with others through mass media, research literature and the internet.
The second example involves the evaluation of a malaria prevention program in a district in East Africa. The main goal of the program is to ensure that all pregnant women and children <14 years are sleeping under ITNs in all the villages within the district.
In order to determine the extent of the malaria problem in this district, we review recent population-based surveys such as the DHS and MICS. The results will show that there is a high prevalence of malaria and low use of ITNs. We can also conduct a needs assessment by selecting a few homes in a village and interviewing some community members within the selected homes. The results of this needs assessment would show that use of ITNs is limited due to lack of knowledge and the high cost of ITNs. The Toolkit outlines examples of other potential data sources that can be used to identify a health problem within a specified region, and also states the strengths and limitations of each data source.
In order to determine what interventions could work in this situation, we review other programs, collaborate with more experienced programs and staff, and perhaps plan a pilot project. The Toolkit has some great resources for learning about existing successful malaria interventions, such as the Roll Back Malaria website referenced in the Toolkit. In this example, we design an intervention in our district of interest. With the help of handheld GPS devices, we create clusters of 200 households within the district. We visit all homes, identify pregnant women and children <14 years, and provide them with vouchers for ITNs. Then program staff make follow-up visits to ensure that the ITNs are installed correctly and teach villagers how to reapply the insecticide.
Once we have decided on the intervention we will roll out, we develop a logic model that outlines all the steps between inputs and outcomes. We can also develop a PMP to help guide the activities. It is important to engage stakeholders at every step of the program. These stakeholders include community members and village leaders, local government and district officials, experts in the field, and donors and policy makers. The Toolkit describes a logic model and has a checklist of the important components of a PMP for a malaria prevention program.
In order to determine whether we are implementing the program as planned, we monitor the various components of the program. This also ensures that the quality of the program is maintained. The Toolkit has several resources for monitoring malaria programs, such as indicator guides for malaria programs and references on how to conduct routine monitoring as well as impact evaluation.
In order to conduct an outcome evaluation for our malaria program, we can use a time-series design. Generally, time-series designs look for changes over time to determine trends: evaluators observe the intervention group multiple times before and after the intervention and analyze the trends in each period. An impact evaluation can be conducted a few years after the end of the program; it needs to measure the attribution of change to the intervention, taking into account all other changes and interventions that might have occurred during the same period, and it can use existing data sources, such as large surveys.
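The trend comparison a time-series design relies on can be sketched by fitting a straight line to the observations before the intervention and another to those after it. A minimal sketch, with hypothetical quarterly ITN-use percentages:

```python
# Interrupted time-series sketch: compare pre- and post-intervention trends.
# Quarterly ITN-use percentages are hypothetical, for illustration only.

def slope(series):
    """Least-squares slope of evenly spaced observations (change per period)."""
    n = len(series)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(series) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, series))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

pre  = [20.0, 21.0, 21.5, 22.0]   # quarters before the ITN voucher program
post = [30.0, 33.0, 36.0, 39.0]   # quarters after the program

print(f"Pre-intervention trend:  {slope(pre):+.2f} points/quarter")
print(f"Post-intervention trend: {slope(post):+.2f} points/quarter")
```

A post-intervention slope (and level) clearly above the pre-intervention trend is the signal the evaluators look for; a real analysis would also test whether the change is statistically significant.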
Once the program has been rolled out successfully, it is important to ensure that it remains sustainable. This can be done by ensuring that the local government continues the voucher program and that community members are empowered to access ITNs and use them effectively. The program can also be scaled up to other districts and regions within the country, thus benefiting a larger audience. The Toolkit has descriptions and examples of how programs can be sustained and scaled up to new areas.
Thank you for joining us today. We’ve appreciated your participation. We are about to begin a Q&A session for this webinar, but I want to let you know about a 3-day online forum that has begun today. It will (1) build capacity in monitoring and evaluation, (2) advocate the importance of investing in monitoring and evaluation (M&E), and (3) provide a more in-depth overview of MLE’s Measuring Success Toolkit. On your screen, please see the instructions to join the forum. Please join us in the Q&A being moderated by Gretchen.