This presentation discusses the following:
What is an estimate?
What are the factors influencing estimating?
How are agile projects estimated?
How does Agile estimation solve common estimation problems?
The document discusses several techniques for estimating the size and complexity of features in agile development projects, including planning poker, decomposition, and using ideal time vs elapsed time. It emphasizes that estimation in agile focuses on relative sizing rather than durations, and that estimates are intentionally vague at first and improve over time based on measuring team velocity. Key goals of iteration planning meetings are to set commitments and arrive at a prioritized backlog for the upcoming sprint.
Planning Poker is a technique used to estimate effort for tasks in Agile software development. It involves each team member privately selecting a planning poker card representing their estimate for a task. The cards have Fibonacci numbers written on them. The cards are then revealed and discussed if estimates differ, until consensus is reached. Once estimates are established, the team's velocity (amount of work completed per sprint) can be used to predict future release dates. Planning Poker works well because it leverages the wisdom of crowds and averages individual estimates for more accurate results.
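To make the mechanics concrete, here is a minimal Python sketch of a single Planning Poker round; the deck, the sample votes, and the strict everyone-shows-the-same-card consensus rule are illustrative assumptions, not part of any particular tool.

    # Minimal Planning Poker round: cards are revealed simultaneously,
    # and the round repeats (after discussion) until everyone agrees.
    FIBONACCI_DECK = [1, 2, 3, 5, 8, 13, 21]

    def play_round(estimates):
        """Each value in estimates is one member's privately chosen card."""
        assert all(card in FIBONACCI_DECK for card in estimates)
        if len(set(estimates)) == 1:
            return estimates[0]  # consensus: everyone showed the same card
        # Otherwise the high and low estimators explain their reasoning and
        # the team votes again; here we simply signal "no consensus yet".
        return None

    print(play_round([3, 5, 13]))  # None -> discuss the outliers, re-vote
    print(play_round([5, 5, 5]))   # 5    -> consensus, the story is 5 points

In practice many teams accept near-consensus after discussion rather than strict unanimity; the strict rule above just keeps the sketch short.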
This document discusses effort estimation in agile projects. It recommends estimating tasks by relative size using story points rather than absolute time values. Planning poker, where a team privately selects effort estimate cards and then discusses them, is advocated as it emphasizes relative estimation and reduces anchoring bias. Velocity, the number of points a team can complete per iteration, is key for planning and adjusting for estimation errors over time. Burn down charts also increase visibility of progress.
Agile Patterns: Agile Estimation
We’re agile, so we don’t have to estimate and have no deadlines, right? Wrong! This session will consist of a review of the problems with estimation in projects today, followed by an overview of the concept of agile estimation and the notion of re-estimation. We’ll learn about user stories, story points, and team velocity, and how to apply them all to estimation and iterative re-estimation. We will take a look at the cone of uncertainty and how to use it to your advantage. We’ll then take a look at the tools we will use for Agile Estimation, including planning poker, Visual Studio Team System, and much more. This is a very interactive session, so bring a lot of questions!
This slide deck gives an excellent overview of agile planning and estimation.
It will be really helpful if presented to a Scrum/Agile team that needs to understand the activities involved in release planning, sprint planning, and estimation.
Estimating is hard to get right;
Why is estimating hard to get right?;
Why do we need to estimate?;
Agile estimating and planning;
Determine the team’s velocity;
Identify features and stories;
Define stories or features;
Planning Poker;
Agile Release Plan;
What if you don’t know the team’s velocity?;
Estimating from ideal team structure;
The effect of rework;
Proposals and SOWs;
My main goal is to share, and let you experiment with, some of the techniques I use when transforming teams into high-performing agile teams, by providing you with four different ways to estimate projects in Agile.
This is a presentation I made at the beginning of this year to explain the basics of agile estimates. Although it doesn't cover exceptions and some special cases (such as hours-based estimates), it's a good starting point. A companion text to help you understand the presentation better will be published on my Medium channel soon.
User stories are estimated in story points to plan project timelines. Story points are a relative unit used to estimate complexity rather than time. The team estimates stories together by first independently assigning points, then discussing to converge on a shared estimate. Velocity is calculated based on the number of points completed in an iteration to predict future capacity. Pair programming may impact velocity but not the story point estimates themselves. Estimates should consider the story complexity and effort from the team perspective rather than individuals.
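The velocity arithmetic described here reduces to very simple bookkeeping; a hedged sketch in Python, with an invented iteration history:

    # Velocity = story points completed per iteration; averaging recent
    # iterations gives a rough forecast of next iteration's capacity.
    completed_points = [21, 18, 25, 22]   # invented iteration history

    def forecast_capacity(history, window=3):
        recent = history[-window:]
        return sum(recent) / len(recent)

    print(forecast_capacity(completed_points))  # (18 + 25 + 22) / 3 ~= 21.7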
An introduction to agile estimation and release planning (James Whitehead)
The document provides an introduction to agile estimation and release planning. It discusses building a product backlog by creating user stories, estimating each story using complexity buckets and points, and ensuring user stories meet INVEST criteria. It also covers splitting large user stories, acceptance criteria, and ensuring the product backlog is DEEP by being detailed, estimated, emergent, and prioritized. Estimation techniques include relative sizing to other stories and complexity buckets for different aspects like user interface, business logic, etc. The document emphasizes that estimation is about relative size rather than fixed timelines and that consistency is more important than absolute accuracy.
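Relative sizing is often done by triangulating a new story against a few already-sized reference stories; the sketch below illustrates the idea with invented reference stories and a simple midpoint rule of thumb, not a standard algorithm.

    # Relative sizing by triangulation: compare a new story against
    # already-sized reference stories instead of estimating hours.
    reference_stories = {
        "change a label": 1,
        "add a form field + validation": 3,
        "new report with filters": 8,
    }

    def size_by_comparison(bigger_than, smaller_than):
        """Pick a point bucket between two reference stories (illustrative rule)."""
        lo = reference_stories[bigger_than]
        hi = reference_stories[smaller_than]
        fib = [1, 2, 3, 5, 8, 13]
        candidates = [p for p in fib if lo < p < hi]
        return candidates[len(candidates) // 2] if candidates else hi

    # "Bigger than the form field, smaller than the report" -> 5 points.
    print(size_by_comparison("add a form field + validation", "new report with filters"))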
This document discusses software development methodologies and estimating work. It provides biographical information about the author, including their experience in agile coaching and teaching. It then explores debates around estimating work, noting that estimates are not deadlines, and focuses on understanding systems and accepting variability. Various estimation techniques are presented, such as planning poker, story points, and lead time. A real case study is shared showing how moving away from estimates toward continuous delivery improved outcomes. The document emphasizes that #NoEstimates can work if work is done incrementally and rapidly to deliver value.
We’re agile, so we don’t have to estimate and have no deadlines, right? Wrong! This session will review the problem with estimations in projects today and then give an overview of the concept of agile estimation and the notion of re-estimation. We’ll learn about user stories, story points, team velocity, and how to apply them all to estimation and iterative re-estimation. We will take a look at the cone of uncertainty and how to use it to your advantage. We’ll then take a look at the tools we will use for Agile Estimation, including planning poker, Visual Studio TFS and much more.
The document discusses different approaches to estimation in waterfall and Scrum methodologies. In Scrum, teams estimate their own work in story points, which are relative units based on size and complexity. Story points help drive cross-functional behavior and do not decay over time. Ideal days estimates involve determining how long a task would take with ideal conditions and no interruptions. Planning poker uses story point cards to facilitate discussion and reach consensus on estimates. Release planning in Scrum involves estimating velocity over sprints to determine how many product backlog items can be completed.
Software management...for people who just want to get stuff done (Ciff McCollum)
This document discusses concepts and techniques for software project management, including planning, estimation, execution, and retrospectives. It covers these concepts at the level of projects, milestones within projects, sprints, and individual stories. Key points emphasized include breaking work into small chunks, using techniques like planning poker and burndown charts, being honest about estimates, and using retrospectives to improve.
These are the slides from the Agile Estimation Workshop I gave at AgileChina 2015. The morning session covered opinion-based techniques. The afternoon covered empirical techniques based on cycle time, Little's Law, and Monte Carlo simulation.
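Little's Law, one of the empirical techniques mentioned, ties three long-run averages together: work in progress = throughput × cycle time. A tiny worked example with illustrative numbers:

    # Little's Law: WIP = throughput * cycle_time (stable system, long-run averages)
    throughput = 4.0   # stories finished per week (illustrative)
    cycle_time = 2.5   # average weeks from start to done (illustrative)
    wip = throughput * cycle_time
    print(wip)         # 10.0 stories in progress on average

    # Rearranged: with 10 stories in flight and 4 finishing per week,
    # expect a new story to take about 10 / 4 = 2.5 weeks.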
Introduction to Agile Estimation & Planning (Amaad Qureshi)
Presented by Natasha Hill & Amaad Qureshi
In this session, we will be covering the techniques of estimating Epics, Features and User Stories on an Agile project and then of creating iteration and release plans from these artefacts.
Agenda
1. Why traditional estimation approaches fail
2. What makes a good Agile Estimating and Planning approach
3. Story points vs. Ideal Days
4. Estimating product backlog items with Planning Poker
5. Iteration planning - estimating no more than a few weeks ahead
6. Release planning - creating a longer-term plan, typically looking 3-6 months ahead
7. Q&A
This document discusses techniques for software project estimation. It recommends providing estimates as ranges rather than specific numbers, and always clarifying what an estimate will be used for. It emphasizes aggregating independent estimates, using past project data to calibrate estimates, and not negotiating estimates or commitments. Key techniques include decomposing work into independently estimable units, using the "law of large numbers" for accuracy, and re-estimating regularly based on actual project velocity. Overall, the document provides guidance for creating estimates that are useful without being overly precise commitments.
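The "law of large numbers" argument can be demonstrated numerically: when many independently estimated tasks are summed, unbiased errors partly cancel, so the relative error of the total shrinks roughly with the square root of the task count. A small simulation, with all distributions invented for illustration:

    import random

    # Each task's actual effort deviates randomly from its estimate of 10.
    # Individually the error is ~20%; summed over 100 tasks it is far smaller.
    random.seed(42)
    estimates = [10.0] * 100
    actuals = [e * random.gauss(1.0, 0.2) for e in estimates]

    total_estimate = sum(estimates)
    total_actual = sum(actuals)
    relative_error = abs(total_actual - total_estimate) / total_estimate
    print(f"total relative error: {relative_error:.1%}")  # ~2%, not ~20%

Note this cancellation only happens for independent, unbiased errors; a systematic optimism bias does not average out, which is why the document also stresses calibrating against past project data.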
Rise and fall of Story Points. Capacity-based planning from the trenches (Mikalai Alimenkou)
People in the Agile world use Story Points: for Agile coaches and trainers they are the easiest way to explain how estimation and planning should be done in the "new world". But this simple concept then breaks down in real practical cases. These days teams consist of highly specialized people working on the backend, frontend, testing, infrastructure, and so on, and it is almost impossible for them to share a common scale of complexity. That is only one of the problems we are going to cover in this talk.
To stay constructive, rather than just being an old-school XP guy, Mikalai will share his experience with a more precise and pragmatic estimation and planning technique: capacity-based planning.
This document provides an overview of agile stories, estimating, and planning. It discusses what user stories are, how to write them, and techniques for estimating story sizes such as story points. It also covers different levels of planning including release planning, iteration planning, and daily planning. The document is intended to provide background information on using agile methods for requirements management and project planning.
The document describes a method called "Planning Poker" for quickly estimating project tasks through group discussion and iterative voting. A facilitator leads a team in estimating how many chickens are needed for a dinner party for 20 people. Through three rounds of voting and discussion to clarify assumptions, the team converges on an estimate of 13 chickens. Planning Poker aims to leverage collective expertise while avoiding biases from individual experts.
- Story points are an arbitrary measure used by Scrum teams to estimate the effort required to implement a user story. Teams typically use a Fibonacci sequence like 1, 2, 3, 5, 8, 13, 20, 40, 100.
- Estimating user stories allows teams to plan how many highest priority stories can be completed in a sprint and helps forecast release schedules. The whole team estimates during backlog refinement.
- Stories are estimated once they are small enough to fit in a sprint and acceptance criteria are agreed upon. Teams commonly use planning poker where each member privately assigns a story point value and the team discusses until consensus is reached.
Scrum uses relative estimation and velocity to aid in planning and making trade-off decisions. Relative estimation involves comparing the effort of new requirements to previously estimated ones, which humans are better at than making absolute estimates. Velocity is the amount of work completed in an iteration, measured in story points or hours; because it varies over time, for longer-term planning it is best treated as a range. There are two types of Scrum planning: fixed-date planning estimates how much can be completed by a date based on velocity, while fixed-scope planning estimates the timeframe to complete all backlog items based on velocity. Both use velocity as a range rather than a precise prediction.
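Both planning styles reduce to simple arithmetic over a velocity range; a sketch with invented numbers:

    import math

    backlog_points = 200
    velocity_range = (18, 25)   # slowest and fastest recent sprints (invented)

    # Fixed-date: how much scope fits in the next 6 sprints?
    sprints_available = 6
    scope_range = (velocity_range[0] * sprints_available,
                   velocity_range[1] * sprints_available)
    print(f"deliverable scope: {scope_range[0]}-{scope_range[1]} points")

    # Fixed-scope: how many sprints to finish the whole backlog?
    sprint_range = (math.ceil(backlog_points / velocity_range[1]),
                    math.ceil(backlog_points / velocity_range[0]))
    print(f"sprints needed: {sprint_range[0]}-{sprint_range[1]}")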
This document discusses agile estimation and planning techniques. It recommends estimating tasks relatively using story points rather than absolute time estimates. Planning poker, where teams privately estimate tasks and then discuss estimates, is presented as an effective technique. Prioritizing a backlog by value, risk, and estimate allows teams to focus on the most important work. Iterative planning within sprints and tracking progress via burn down charts increases transparency.
The document discusses techniques for estimating work in Agile projects using story points and ideal days. It defines story points and ideal days, and explains how to assign estimates relatively by comparing stories rather than using specific units of time. The document also recommends estimating approaches like planning poker, re-estimating as stories change, and using the right units to keep estimates meaningful but relative.
Planning Poker is a consensus-based estimating technique used in agile software development methodologies like Scrum and Extreme Programming (XP). It involves a team estimating task lengths using cards displaying estimates in a Fibonacci sequence. The team discusses their estimates until reaching consensus, with the developer assigned the task having significant input. This engagement aims to create accurate estimates through discussion while avoiding one person influencing others.
Every year, software companies spend a huge amount of time and effort estimating large projects, and still end up regularly missing the mark - often by huge amounts. What the heck is going on? With all of the planning poker, and PI planning, and #noestimates, why isn't this figured out yet?
In this talk, we'll dive into probability theory and psychology to discover some of the common underlying causes for a lack of predictability. Once we understand why the world is so uncertain, we'll talk about how we can live with our estimation failures, while still thrilling our customers and maintaining enough predictability to succeed as an organization.
Effort estimation for software development (Spyros Ktenas)
Software effort estimation has been an important issue for almost everyone in the software industry at some point. Below I will try to give some basic details on methods, best practices, common mistakes, and available tools.
You may also check a tool implementing methods for estimation at http://effort-estimation.gatory.com/
Spyros Ktenas
http://open-works.org/profiles/spyros-ktenas
The document discusses estimation techniques. It presents five estimation laws: 1) don't estimate if you can measure; 2) compare instead of estimating in absolute units; 3) only measure things that are actually measurable; 4) reduce the precision of estimates to match your knowledge; and 5) use different metrics for different kinds of estimates. Good practices discussed include using story sizing for requirements, measuring in hours for small tasks, using velocity, splitting large stories, and measuring fixed cycle times. The document provides resources for further learning about agile estimation techniques.
The document provides tips and techniques for software estimation. It discusses defining estimates and the factors that influence accuracy, such as probability statements and the cone of uncertainty. The primary purpose of estimation is to determine whether targets are realistic, not to predict outcomes perfectly. Techniques covered include counting elements to estimate, using historical data for calibration, individual expert judgment that breaks tasks into an appropriate level of detail, analogy with past projects, and group expert judgment. Accuracy improves with proper technique selection, documentation of assumptions, and incorporating lessons learned.
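One common way to act on the cone of uncertainty is to widen a point estimate into a range using phase-dependent multipliers; the values below are the approximate figures usually attributed to McConnell's cone, and the exact numbers vary by source:

    # Cone-of-uncertainty multipliers (approximate values after McConnell);
    # an estimate should be reported as a range, not a single number.
    CONE = {
        "initial concept":       (0.25, 4.0),
        "approved definition":   (0.50, 2.0),
        "requirements complete": (0.67, 1.5),
        "design complete":       (0.80, 1.25),
    }

    def estimate_range(point_estimate, phase):
        low, high = CONE[phase]
        return point_estimate * low, point_estimate * high

    print(estimate_range(100, "initial concept"))        # (25.0, 400.0)
    print(estimate_range(100, "requirements complete"))  # (67.0, 150.0)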
What are the odds of making that number risk analysis with crystal ball - O... (p6academy)
This document provides an overview of a presentation on risk analysis using Crystal Ball. It introduces the presenter, Eric Torkia, and his background in risk analysis, project feasibility, financial modeling, and organizational change management. It then discusses how Monte Carlo simulation can help quantify risk and uncertainty in estimates to improve decision making for projects and investments by providing a full range of potential outcomes and probabilities. The document provides examples of how simulation can analyze risk in areas like project cost estimating, capital budgeting decisions, and portfolio planning.
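Crystal Ball is a commercial Excel add-in, but the Monte Carlo idea it implements is easy to sketch in plain Python: model each cost line as a distribution rather than a point, sample many scenarios, and read probabilities off the distribution of totals. All figures below are invented:

    import random

    random.seed(1)

    def project_cost():
        # Each cost line is random.triangular(low, high, mode) - invented figures.
        dev   = random.triangular(80, 200, 120)
        test  = random.triangular(20, 60, 35)
        infra = random.triangular(10, 40, 15)
        return dev + test + infra

    totals = sorted(project_cost() for _ in range(10_000))
    p50, p90 = totals[5_000], totals[9_000]
    print(f"median cost ~{p50:.0f}; 90th percentile ~{p90:.0f}")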
The document discusses how software provides value for many industries and the economy. However, projects often experience cost overruns, delays, and failures to deliver value efficiently. The document argues that development costs, schedules, and values are uncertain and best viewed as ranges rather than single estimates. Seeing projects this way allows focusing on continuously improving estimates and reducing risks to better ensure delivery of value over the project lifecycle. Tailoring governance approaches to different project types based on their risks and opportunities for discovery can help optimize value creation.
The document discusses how software provides value for other industries and the challenges of efficiently delivering that value. While estimates of costs and schedules are often inaccurate, focusing only on avoiding overruns limits opportunities. Instead, engineering principles can be applied to create value by treating costs and benefits as random variables, reducing uncertainty over time, and increasing potential upside benefits. Tailoring governance approaches based on risk areas can improve value delivery.
GDG Cloud Southlake #5 Eric Harvieux: Site Reliability Engineering (SRE) in P... (James Anderson)
Eric Harvieux, an SRE on Google's Customer Reliability Engineering (CRE) team, will talk to us about Site Reliability Engineering (SRE) in practice, including a panel discussion with Fidelity, Home Depot, Sabre, and Google SRE practitioners. We hope to hear how real-life SRE differs from the books.
The document discusses key concepts in defining and structuring decision problems. It defines the three components of a problem statement as the current state, desired state, and central objective. Decision trees and influence diagrams are presented as tools to structure choices and uncertainties. Deterministic, stochastic, and simulation models are described based on their mathematical focus. Probability is discussed in terms of frequentist, subjective, and logical interpretations, and methods for forecasting and decomposing complex probabilities are outlined. Calibration and sensitivity analysis are introduced as ways to evaluate probability estimates and assumptions.
Given at Axial HQ for the New York chapter of Venwise, this talk details how Axial approaches building products predictably through a combination of focus, objectives, prioritization and forecasting. We call it stack.
Check out more of what we're building over at: axialcorps.com
Chapter 6 - Decision Making: The Essence of the Manager's Job (Ppt06D)
The document discusses decision making and the decision-making process for managers. It outlines eight steps in the process, from identifying the problem and decision criteria through developing and analyzing alternatives to selection, implementation, and evaluation. It also discusses rational decision making and biases managers may exhibit, such as overconfidence and anchoring effects. Finally, it provides guidelines for effective decision making, including understanding cultural differences, using an effective process, and embracing complexity.
The document discusses various techniques for agile planning including estimating story sizes using planning poker, estimating velocity based on prior iterations, prioritizing stories based on value, cost, risk and new knowledge, and creating a release plan by selecting stories and estimating a release date based on estimated velocity. It cautions that estimates are not commitments and provides tips for splitting large stories and combining planning at both the release and sprint levels.
This document provides an introduction to SCRUM, an agile framework for developing software. It discusses the core concepts of SCRUM including roles, events, artifacts, and values. The three main roles are the Product Owner, Development Team, and Scrum Master. Key events in the SCRUM process are the Sprint Planning Meeting, Daily Scrum, Sprint Review, and Sprint Retrospective. Main artifacts include the Product Backlog and Sprint Backlog. The document also covers user stories, estimation techniques, and how a typical SCRUM sprint cycle works.
The document discusses how to sell lean and agile development practices to various stakeholders. It covers selling to project managers by highlighting benefits like more structure, faster reporting, and less time spent on non-value adding activities. For sales teams, it stresses focusing on quality and the client experience to eliminate fears. When selling to clients, key points are that agile leads to lower maintenance costs, flexibility, and quality through continuous deployment.
Room to Breathe: The BA's role in project estimation (ufunctional)
What's the Business Analyst's role in project estimation?
According to this presentation, it's "Getting the project through the Hot Zone, with Room to Breathe."
Room to Breathe means enough time for the team and project manager to deal with the remaining uncertainty as it comes up.
The Hot Zone starts when up-front requirements and planning are yielding diminishing returns, the pressure to commit to a plan is mounting, and there's still more than about 25% uncertainty in the estimate.
The difference between a requirements gatherer and a Business Analyst is that a BA provides great decision support, and the estimation problem is at the heart of that.
This document discusses decision analysis and risk management. It covers decision making under certainty, ignorance, and risk. Key concepts include the expected monetary value, maximax, maximin, and expected return decision rules. Under certainty, the decision maker knows the state of nature with certainty. Under ignorance, all states are possible but their probabilities are unknown. Under risk, the probabilities of the states are known. Expected monetary value quantifies risks by multiplying probability and impact. Maximax selects the strategy with the highest possible return, while maximin selects the strategy whose worst-case outcome is least bad. Expected return selects the alternative with the highest long-term expected return based on the probabilities of outcomes. The document emphasizes applying decision analysis concepts to project risk management.
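A compact worked example of the three decision rules named above, with invented payoffs: maximax picks the best best case, maximin the best worst case, and expected return weights each outcome by its probability:

    # Payoff of each strategy under two states of nature (invented numbers).
    payoffs = {
        "aggressive":   {"good market": 100, "bad market": -40},
        "conservative": {"good market":  40, "bad market":  10},
    }
    p = {"good market": 0.6, "bad market": 0.4}  # known only "under risk"

    maximax = max(payoffs, key=lambda s: max(payoffs[s].values()))
    maximin = max(payoffs, key=lambda s: min(payoffs[s].values()))
    best_emv = max(payoffs,
                   key=lambda s: sum(p[o] * v for o, v in payoffs[s].items()))

    print(maximax)   # aggressive   (best possible return: 100)
    print(maximin)   # conservative (best worst case: 10)
    print(best_emv)  # aggressive   (EMV 44 vs. conservative's 28)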
The document outlines an 8-step process for effective problem solving: 1) Identify the problem, 2) Understand the current situation, 3) Identify the root causes, 4) Plan improvements, 5) Execute the improvements, 6) Confirm the results, 7) Standardize the improvements, and 8) Plan for the future. Key aspects of the process include using tools like fishbone diagrams, Pareto charts, and goal setting to thoroughly analyze problems and select effective solutions. The process advocates containing problems while root causes are addressed, prioritizing high-impact improvements that require low effort, monitoring solutions, and documenting standardized practices to maintain results over time.
This document discusses various search features beyond basic matching and ranking, including facets, query auto-completion, spelling correction, and query relaxation. It provides examples of how these features are implemented in Solr to help users formulate queries, understand results, and narrow down search outputs. Specific challenges with facets for e-commerce search are outlined, such as handling product variants and selecting the best facet values. Solutions proposed include indexing one document per variant, using collapse queries, and executing a prior facet request to select meaningful facets. The document also discusses approaches to auto-completion using suggesters and spelling correction using the Solr spellcheck component or alternative methods like fuzzy search over a query index. Finally, query relaxation techniques are briefly covered.
- Eric Pugh is the co-founder of OpenSource Connections, an Elasticsearch and Solr consultancy.
- OpenSource Connections helps clients improve their search relevance through consultancy, training, and community initiatives like meetups and conferences.
- Many websites have "broken" search relevance due to issues like poor collaboration, difficult testing processes, and slow iterations. OpenSource Connections aims to help clients address these issues through tools like their search dashboard Quepid.
- Improving search relevance is important for better conversion rates, understanding customer intent, and enabling personalization. OpenSource Connections provides strategies and services to help tune clients' search using frameworks, tools like Quepid, and a focus on measurement.
Smarter search drives value to your business. Delivering search that matches users to the right content is what you care about. But organizations often get stuck getting there. It turns out that you need quite a number of very different ingredients to deliver tremendous search. It can make your head spin! To help you think through where your team is on its road to smarter search, Pugh introduces the maturity model used by OpenSource Connections and walks you through a very concrete method to inventory needed skills and translate that into search roles for your team. He shows how to measure your capabilities in key areas of search to drive better ROI from search.
The right path to making search relevant - Taxonomy Bootcamp London 2019 (OpenSource Connections)
This document discusses improving search relevance. It notes that search quality has three aspects: relevance, performance, and experience. It emphasizes that improving relevance requires a cross-functional search team that is educated, empowered, and builds skills internally. It also stresses the importance of continuous measurement and refinement through metrics, instrumentation, and open source tools. The overall message is that achieving search relevance is as much a people problem as a technical one.
Payloads have been a powerful aspect of Lucene for a long time, but have only had limited exposure in Solr. The Tika project has only recently finished integrating the powerful Tesseract OCR library, bringing the prospect of OCR to the masses.
Haystack 2019 Lightning Talk - The Future of Quepid - Charlie Hull (OpenSource Connections)
Quepid is a search relevance dashboard and testing tool that is currently available as a $99/month hosted service. The company announced at a conference that Quepid will now be free to use as a hosted service and will soon be released as open source software. They are starting an open source community on GitHub and in a Slack channel to collaborate on the project and help drive broader adoption.
This document discusses Apache Tika, a tool for extracting text and metadata from various file formats. It describes how Tika works and some challenges that may occur such as exceptions, unsupported formats, or memory issues. The document also mentions a tool called tika-eval that profiles Tika runs and exceptions. Future plans for Tika include improved CSV, ZIP file parsing and detection as well as more modularized statistics collection and language identification.
Haystack 2019 Lightning Talk - Relevance on 17 million full text documents - ... (OpenSource Connections)
HathiTrust is a shared digital repository containing over 17 million scanned books from over 140 member libraries, totaling around 5 billion pages. It faces challenges in providing large-scale full-text search across this multilingual collection where document quality and structure varies. Initial approaches involved a two-tiered index but relevance must balance weights between full text and shorter metadata fields. Further tuning of algorithms like BM25 is needed to properly rank longer documents in the collection against metadata.
This document discusses deploying Solr Cloud on Kubernetes. It notes that Kubernetes provides a universal language for deploying, configuring, and managing applications in the cloud or locally. Using Kubernetes can reduce costs and allow leveraging DevOps and SRE talent. However, deploying stateful applications like Solr Cloud on Kubernetes presents challenges related to managing stateful sets, configurations, persistent volumes, and cluster management. Questions are also raised around multi-zone configurations, pod replacement policies, and whether configurations are specific to certain cloud providers like AWS. Success stories are sought from users already deploying technologies like Zookeeper and Kafka on Kubernetes.
This document introduces Quaerite, a search relevance toolkit for testing search relevance parameters offline. It allows running experiments to test different combinations of tokenizers, filters, scoring models and other parameters to evaluate search relevance without live user queries. The toolkit supports experimenting with all parameter permutations using grid search or random search, and also incorporates a genetic algorithm with cross-fold validation. It currently supports Apache Solr and plans to add support for ElasticSearch. The goal is to help optimize search relevance through offline testing of parameter configurations.
Haystack 2019 - Search-based recommendations at Politico - Ryan Kohl (OpenSource Connections)
Over the past year, the POLITICO team has developed a recommendation system for our users, which recommends not only news content to read but also news topics to subscribe to. This talk will discuss our development path, including dead-ends and performance trade-offs. In the end, the team produced a system based on search technology (in our case, Elasticsearch) and refined by machine learning techniques to achieve a balance between personalization and serendipity.
With the advent of deep learning and algorithms like word2vec and doc2vec, vectors-based representations are increasingly being used in search to represent anything from documents to images and products. However, search engines work with documents made of tokens, and not vectors, and are typically not designed for fast vector matching out of the box. In this talk, I will give an overview of how vectors can be derived from documents to produce a semantic representation of a document that can be used to implement semantic / conceptual search without hurting performance. I will then describe a few different techniques for efficiently searching vector-based representations in an inverted index, including LSH, vector quantization and k-means tree, and compare their performance in terms of speed and relevancy. Finally, I will describe how each technique can be implemented efficiently in a lucene-based search engine such as Solr or Elastic Search.
Haystack 2019 - Natural Language Search with Knowledge Graphs - Trey Grainger (OpenSource Connections)
To optimally interpret most natural language queries, it is necessary to understand the phrases, entities, commands, and relationships represented or implied within the search. Knowledge graphs serve as useful instantiations of ontologies which can help represent this kind of knowledge within a domain.
In this talk, we'll walk through techniques to build knowledge graphs automatically from your own domain-specific content, how you can update and edit the nodes and relationships, and how you can seamlessly integrate them into your search solution for enhanced query interpretation and semantic search. We'll have some fun with some of the more search-centric use cased of knowledge graphs, such as entity extraction, query expansion, disambiguation, and pattern identification within our queries: for example, transforming the query "bbq near haystack" into
    {
      filter: ["doc_type":"restaurant"],
      "query": {
        "boost": {
          "b": "recip(geodist(38.034780,-78.486790),1,1000,1000)",
          "query": "bbq OR barbeque OR barbecue"
        }
      }
    }
We'll also specifically cover use of the Semantic Knowledge Graph, a particularly interesting knowledge graph implementation available within Apache Solr that can be auto-generated from your own domain-specific content and which provides highly-nuanced, contextual interpretation of all of the terms, phrases and entities within your domain. We'll see a live demo with real world data demonstrating how you can build and apply your own knowledge graphs to power much more relevant query understanding within your search engine.
For e-commerce applications, matching users with the items they want is the name of the game. If they can't find what they want then how can they buy anything?! Typically this functionality is provided through search and browse experience. Search allows users to type in text and match against the text of the items in the inventory. Browse allows users to select filters and slice-and-dice the inventory down to the subset they are interested in. But with the shift toward mobile devices, no one wants to type anymore - thus browse is becoming dominant in the e-commerce experience.
But there's a problem! What if your inventory is not categorized? Perhaps your inventory is user generated or generated by external providers who don't tag and categorize the inventory. No categories and no tags means no browse experience and missed sales. You could hire an army of taxonomists and curators to tag items - but training and curation will be expensive. You can demand that your providers tag their items and adhere to your taxonomy - but providers will buck this new requirement unless they see obvious and immediate benefit. Worse, providers might use tags to game the system - artificially placing themselves in the wrong category to drive more sales. Worst of all, creating the right taxonomy is hard. You have to structure a taxonomy to realistically represent how your customers think about the inventory.
Eventbrite is investigating a tantalizing alternative: using a combination of customer interactions and machine learning to automatically tag and categorize our inventory. As customers interact with our platform - as they search for events and click on and purchase events that interest them - we implicitly gather information about how our users think about our inventory. Search text effectively acts like a tag, and a click on an event card is a vote that the clicked event is representative of that tag. We are able to use this stream of information as training data for a machine learning classification model; and as we receive new inventory, we can automatically tag it with the text that customers will likely use when searching for it. This makes it possible to better understand our inventory, our supply and demand, and most importantly this allows us to build the browse experience that customers demand.
In this talk I will explain in depth the problem space and Eventbrite's approach in solving the problem. I will describe how we gathered training data from our search and click logs, and how we built and refined the model. I will present the output of the model and discuss both the positive results of our work as well as the work left to be done. Those attending this talk will leave with some new ideas to take back to their own business.
Haystack 2019 - Improving Search Relevance with Numeric Features in Elasticse... (OpenSource Connections)
Recently Elasticsearch has introduced a number of ways to improve the search relevance of your documents based on numeric features. In this talk I will present the newly introduced field types "rank_feature", "rank_features", "dense_vector", and "sparse_vector" and discuss in what situations and how they can be used to boost the scores of your documents. I will also talk about the inner workings of queries based on these fields, and related performance considerations.
Haystack 2019 - Architectural considerations on search relevancy in the conte... (OpenSource Connections)
With an increasing amount of relevancy factors, relevancy fine-tuning becomes more complex as changing the impact of factors produces increasingly more unintended side effects. In recent years, there has been a lot of discussion about how learning algorithms can replace manual relevancy fine-tuning in order to manage this complexity. However, discussions about the challenge of relevancy should additionally consider architectural aspects. Especially microservice-based architectures provide many ways to encapsulate and to separate complexities of search solutions, which facilitates optimizing the search as well as locating and fixing problems.
Generally, relevancy factors can be assigned to three different groups, each handled at a different stage of the search request processing. The first group contains contextual factors that depend on certain characteristics of a query, such as query-related boosts lifting up top-sellers for queries or category-related boosts to distinguish products from their accessories. Such contextual factors can be handled as a step of the preprocessing of queries. The respective boosting information can simply be appended to the query before it is actually sent to the search engine. Ideally, the normalization of the query is done beforehand.
The second group contains factors that are considered for all queries in more or less the same way, e.g. a ranking function based on keyword occurrences, product topicality, or total sales. Factors in this group can be handled directly by configuring the search engine.
The third group contains situational factors. For instance, a certain product might be a good match for a certain query in general, but for situational circumstances it should not appear among the top five products (e. g. because it is out of stock). Such situational factors can be handled by resorting result sets, after they were returned by the search engine.
The handling of the different factors within successive stages of search request processing will be discussed from an architectural perspective. Implications for applying learning algorithms and the implementation of a personalized search will be considered.
Does your search application include a custom query syntax with various search operators such as Booleans, proximity, term or phrase frequency, capitalization, quoted text or as-is operator, and other advanced operators? Although most search applications offer a natural language-oriented search box, some advanced applications may also offer a custom query syntax for advanced users or automated tasks. The Lucene "classic" query operators that are supported by the Solr edismax query parser (Boolean, phrase with slop, wildcard, etc.) cover a good amount of use cases, but they only get you so far. In this talk, we will explore various strategies to support a custom and advanced query syntax in Solr, covering a spectrum of options from leveraging the out-of-the-box Solr query DSL, to a custom Solr query parser, and hybrid solutions in between. We will identify the options' pros and cons, discuss relevancy considerations, and illustrate the options in Java.
Haystack 2019 - Establishing a relevance focused culture in a large organizat... (OpenSource Connections)
For a relevance engineer one of the most difficult tasks in the tuning process is to convince others in the organization that this is a joint effort. Even the brightest search guru doesn't get very far when working in isolation, so establishing cross-collaboration through the organization is essential. But how to get there?
On top of that, in a large organization a relevance engineer often works on multiple seemingly unrelated search projects. The challenge is not to get drowned in building custom solutions for each project, but to design generic and re-usable strategies which solve many problems at once.
In this session we’ll discuss how to build a widely supported basis for search quality improvements in an organization. It is full of practical tips and examples which could help you in establishing a cross-functional culture that is optimal for relevance tuning. It also zooms in on a holistic approach to solving multiple equivalent search issues at once.
Haystack 2019 - Solving for Satisfaction: Introduction to Click Models - Eliz... (OpenSource Connections)
Relevance metrics like NDCG or ERR require graded judgements to evaluate query relevance performance. But what happens when we don't know what 'good' looks like ahead of time? This talk will look at using click modeling techniques to infer relevance judgements from user interaction logs.
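As background for the graded metrics mentioned here, a minimal NDCG computation in Python; the relevance grades are invented for illustration:

    import math

    def dcg(grades):
        # Graded relevance discounted by log2 of (rank + 1), ranks starting at 1.
        return sum(g / math.log2(i + 2) for i, g in enumerate(grades))

    def ndcg(grades):
        ideal = dcg(sorted(grades, reverse=True))
        return dcg(grades) / ideal if ideal else 0.0

    # Graded judgments for one query's top five results: 0 = bad ... 3 = perfect.
    print(round(ndcg([3, 2, 0, 1, 2]), 3))  # ~0.96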
Graphs & GraphRAG - Essential Ingredients for GenAI (Neo4j)
Knowledge graphs are emerging as useful and often necessary for bringing Enterprise GenAI projects from PoC into production. They make GenAI more dependable, transparent and secure across a wide variety of use cases. They are also helpful in GenAI application development: providing a human-navigable view of relevant knowledge that can be queried and visualised.
This talk will share up-to-date learnings from the evolving field of knowledge graphs; why more & more organisations are using knowledge graphs to achieve GenAI successes; and practical definitions, tools, and tips for getting started.
Presentation Session 2 - Context Grounding.pdf (Mukesh Kala)
This series is your gateway to understanding the WHY, HOW, and WHAT of this revolutionary technology. Over six interesting sessions, we will learn about the amazing power of agentic automation. We will give you the information and skills you need to succeed in this new era.
Making GenAI Work: A structured approach to implementation (Jeffrey Funk)
Richard Self and I present a structured approach to implementing generative AI in your organization, a technology that has added more than ten trillion dollars to the market capitalisations of the Magnificent Seven (Apple, Amazon, Google, Microsoft, Meta, Tesla, and Nvidia) since January 2023.
Companies must experiment with AI to see if particular use cases can work because AI is not like traditional software that does the same thing over and over again. As Princeton University’s Arvind Narayanan says: “It’s more like creative, but unreliable, interns that must be managed in order to improve processes.”
Testing Tools for Accessibility Enhancement Part II.pptx (Julia Undeutsch)
Automatic Testing Tools will help you get a first understanding of the accessibility of your website or web application. If you are new to accessibility, it will also help you learn more about the topic and the different issues that are occurring on the web when code is not properly written.
Securely Serving Millions of Boot Artifacts a Day by João Pedro Lima & Matt ... (ScyllaDB)
Cloudflare’s boot infrastructure dynamically generates and signs boot artifacts for nodes worldwide, ensuring secure, scalable, and customizable deployments. This talk dives into its architecture, scaling decisions, and how it enables seamless testing while maintaining a strong chain of trust.
Building High-Impact Teams Beyond the Product Triad.pdf (Rafael Burity)
The product triad is broken.
Not because of flawed frameworks, but because it rarely works as it should in practice.
When it becomes a battle of roles, it collapses.
It only works with clarity, maturity, and shared responsibility.
techfuturism.com - Autonomous Underwater Vehicles Navigating the Future of Ocea... (Usman siddiqui)
Imagine a robot diving deep into the ocean, exploring uncharted territories without human intervention. This is the essence of an autonomous underwater vehicle (AUV). These self-operating machines are revolutionizing our understanding of the underwater world, offering insights that were once beyond our reach.
An autonomous underwater vehicle is a type of unmanned underwater vehicle (UUV) designed to operate beneath the water’s surface without direct human control. Unlike remotely operated vehicles (ROVs), which are tethered to a ship and controlled by operators, AUVs navigate the ocean based on pre-programmed instructions or real-time adaptive algorithms.
Dev Dives: Unleash the power of macOS Automation with UiPath (UiPathCommunity)
Join us on March 27 to be among the first to explore UiPath’s innovative macOS automation capabilities.
This is a must-attend session for developers eager to unlock the full potential of automation.
📕 This webinar will offer insights on:
How to design, debug, and run automations directly on your Mac using UiPath Studio Web and UiPath Assistant for Mac.
We’ll walk you through local debugging on macOS, working with native UI elements, and integrating with key tools like Excel on Mac.
👨🏫 Speakers:
Andrei Oros, Product Management Director @UiPath
Silviu Tanasie, Senior Product Manager @UiPath
When Platform Engineers meet SREs - The Birth of O11y-as-a-Service Superpowers (Eric D. Schabell)
Monitoring the behavior of a system is essential to ensuring its long-term effectiveness. However, managing an end-to-end observability stack can feel like stepping into quicksand; without a clear plan, you risk sinking deeper into system complexities.
In this talk, we’ll explore how combining two worlds—developer platforms and observability—can help tackle the feeling of being off the beaten cloud native path. We’ll discuss how to build paved paths, ensuring that adopting new developer tooling feels as seamless as possible. Further, we’ll show how to avoid getting lost in the sea of telemetry data generated by our systems. Implementing the right strategies and centralizing data on a platform ensures both developers and SREs stay on top of things. Practical examples are used to map out creating your very own Internal Developer Platform (IDP) with observability integrated from day 1.
Fast Screen Recorder v2.1.0.11 Crack Updated [April-2025]jackalen173
Copy This Link and paste in new tab & get Crack File
↓
https://hamzapc.com/ddl
Fast Screen Recorder is an incredibly useful app that will let you record your screen and save a video of everything that happens on it.
UiPath NY AI Series: Session 3: UiPath Autopilot for Everyone with Clipboard AIDianaGray10
🚀 Embracing the Future: UiPath NY AI Series – Session 3: UiPath Autopilot for Everyone with Clipboard AI
📢 Event Overview
This session will provide a deep dive into how UiPath Clipboard AI and Autopilot are reshaping automation, offering attendees a firsthand look at their capabilities, use cases, and real-world benefits. Whether you're a developer, business leader, or automation enthusiast, you'll gain valuable insights into leveraging these AI-driven tools to streamline operations and maximize productivity. 🤖✨
Mastering NIST CSF 2.0 - The New Govern Function.pdfBachir Benyammi
Mastering NIST CSF 2.0 - The New Govern Function
Join us for an insightful webinar on mastering the latest updates to the NIST Cybersecurity Framework (CSF) 2.0, with a special focus on the newly introduced "Govern" function delivered by one of our founding members, Bachir Benyammi, Managing Director at Cyber Practice.
This session will cover key components such as leadership and accountability, policy development, strategic alignment, and continuous monitoring and improvement.
Don't miss this opportunity to enhance your organization's cybersecurity posture and stay ahead of emerging threats.
Secure your spot today and take the first step towards a more resilient cybersecurity strategy!
Event hosted by Sofiane Chafai, ISC2 El Djazair Chapter President
Watch the webinar on our YouTube channel: https://youtu.be/ty0giFH6Qp0
Java on AWS Without the Headaches - Fast Builds, Cheap Deploys, No KubernetesVictorSzoltysek
Java Apps on AWS Without the Headaches: Fast Builds, Cheap Deploys, No Kubernetes
Let’s face it: the cloud has gotten out of hand. What used to be simple—deploying your Java app—has become a maze of slow builds, tedious deploys, and eye-watering AWS bills. But here’s the thing: it doesn’t have to be this way. Every minute you spend waiting on builds or wrestling with unnecessary cloud complexity is a minute you’re not building the features your customers actually care about.
In this talk, I’ll show you how to go from a shiny new Java app to production in under 10 minutes—with fast builds, cheap deploys, and zero downtime. We’ll go deep into optimizing builds with Gradle (it’s time to leave Maven in the dust), parallelization strategies, and smarter caching mechanics that make your CI/CD pipelines fly. From there, we’ll review the dozen+ ways AWS lets you deploy apps and cut through the chaos to find the solutions that work best for lean, fast, cost-effective pipelines. Spoiler: ECS and EKS usually aren’t the answer. Oh, and I’ll even show you how AI tools like AWS Bedrock can help streamline your processes further, so you can automate what should already be automatic.
This talk is for developers fed up with the cost, complexity, and friction of modern cloud setups—or those who long for the simplicity of the Heroku/Beanstalk/PCF days when deploying to the cloud wasn’t a headache. Whether you’re on AWS, Azure, or GCP, you’ll learn actionable, cloud-agnostic tips to build faster, deploy cheaper, and refocus on what matters most: delivering value to your users.
5. A little about me… Senior Consultant, OpenSource Connections in Charlottesville, Virginia. Masters in Management of IT, University of Virginia, McIntire School of Commerce. We tweaked our Scrum process to incorporate range estimation based on my studies at UVA. Please take the estimation survey: http://www.surveymonkey.com/s/SWNNYQJ
6. The root of all estimation evil: single-point estimates. Chart taken from Software Estimation, Steve McConnell, Figure 1-1, p. 6. “A single-point estimate is usually a target masquerading as an estimate.” -Steve McConnell
7. A realistic estimate distribution. Chart taken from Software Estimation, Steve McConnell, Figure 1-3, p. 8; the nominal outcome is the 50/50 estimate. “There is a limit to how well a project can go but no limit to how many problems can occur.” -Steve McConnell
8. Reasons we are wrong so often: different information, different methods, psychological biases, and the expert problem.
9. Bias in estimation. Imagine this scenario: “Can you build me that CMS website in 2 weeks?” How would you respond? What estimate would you give?
10. Bias in estimation. By supplying my own estimate (or desire) in my question, I have anchored your response. This is called the anchoring (or framing) trap. “Because anchors can establish the terms on which a decision will be made, they are often used as a bargaining tactic by savvy negotiators.” From “The Hidden Traps in Decision Making,” Harvard Business Review, 1998, John Hammond, Ralph L. Keeney, and Howard Raiffa.
11. You’re not as good as you think: “The Expert Problem.” Experts consistently underestimate their margins of error and discount the reasons they were wrong in the past. Excuses for past mistakes: you were playing a different game; invoke the outlier; the “almost right” defense. From The Black Swan: The Impact of the Highly Improbable, Nassim Nicholas Taleb, 2007, Chapter 10: “The Scandal of Prediction.”
12. The best protection: “The best protection against all psychological traps – in isolation or in combination – is awareness.” From “The Hidden Traps in Decision Making,” Harvard Business Review, 1998, John Hammond, Ralph L. Keeney, and Howard Raiffa.
14. How agile already avoids pitfalls: it encourages the whole team to air its estimates, it happens before tasks are assigned, and it uses Scrum poker.
15. How agile already avoids pitfalls: it separates the story from time units and keeps estimates relative, using story points and velocity. Image from: http://leadinganswers.typepad.com/leading_answers/2007/09/agile-exception.html
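To make “story points and velocity” concrete, here is a minimal Python sketch of how a team might forecast from them. All numbers are illustrative, not from the deck: velocity is simply the average points finished per sprint, and the remaining backlog divided by velocity gives a sprint count.

```python
# Illustrative sketch (numbers are made up, not from the deck):
# velocity = average story points completed per sprint.
from math import ceil

completed_per_sprint = [18, 22, 20]   # hypothetical last three sprints
velocity = sum(completed_per_sprint) / len(completed_per_sprint)

backlog_points = 120                  # hypothetical remaining backlog
sprints_left = ceil(backlog_points / velocity)

print(f"velocity: ~{velocity:.1f} points/sprint")
print(f"forecast: ~{sprints_left} sprints for {backlog_points} points")
```

Because the points are relative and the velocity is measured, the forecast corrects itself as the team’s real pace emerges, rather than depending on up-front time guesses.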
22. Incorporating range estimation into Scrum: the team originally estimated 108 hours; the range estimate ran from 114 to 245 hours. Note that the original single point fell below even the low end of the range! The team was still able to finish the original tasks a little early.
23. Range estimation in Scrum Poker It’s very simple – just hold two cards instead of one! The same rules apply about creating discussion between low and high estimators, but you might resolve them differently...
24. Range estimation in Scrum Poker. [Slide shows example two-card hands labeled “on the low end,” “middle of the road,” and “on the high end.”] The likely discussion: “Hey Orange, why do you say ‘2’? Yellow and Blue both say ‘5’.” Likely outcome: 3 or 5.
25. Range estimation in Scrum Poker. [Slide shows another set of hands.] Green is still middle of the road but recognizes some risk; Orange sees this as really easy; Blue sees this as more complicated; Red and Blue no longer agree, so Red is either confused or sees big risks. The likely discussion: Orange and Blue need to compare their visions for this task. Likely outcome: 8-13?
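The deck resolves two-card hands through discussion rather than a formula, but a rough Python sketch of the mechanics may help. The aggregation rule (widest span across all hands) and the hands below are my own illustration, not the deck’s.

```python
# Hypothetical two-card hands: each estimator holds a (low, high) pair.
# Aggregating to the widest span is one plausible starting point; the
# deck itself reaches the final range through discussion.
hands = {
    "Orange": (2, 3),    # sees the task as easy
    "Yellow": (5, 8),
    "Blue":   (8, 13),   # sees the task as complicated
}

low = min(lo for lo, hi in hands.values())
high = max(hi for lo, hi in hands.values())

# The extreme estimators are the ones who should compare visions first.
low_outlier = min(hands, key=lambda name: hands[name][0])
high_outlier = max(hands, key=lambda name: hands[name][1])

print(f"team range before discussion: {low}-{high}")
print(f"{low_outlier} (lowest) and {high_outlier} (highest) talk first")
```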
32. Why 2/3? Because it is both simple and pessimistic. PERT does a similar thing: Expected = [BestCase + (4 × MostLikely) + WorstCase] / 6. Source on PERT: Software Estimation, Steve McConnell, p. 109.
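A short Python sketch of both calculations, reusing the 114-245 hour range from slide 22. Two assumptions of mine: “2/3” is read as the point two-thirds of the way from the low end to the high end of the range, and the most-likely value of 150 hours fed to PERT is purely illustrative.

```python
# PERT expected value, as given on the slide.
def pert(best, most_likely, worst):
    return (best + 4 * most_likely + worst) / 6

low, high = 114, 245                       # hour range from slide 22

# Assumption: the "2/3" rule means the point 2/3 of the way up the range.
two_thirds_point = low + (2 / 3) * (high - low)         # ~201 hours

# 150 is an illustrative most-likely value, not from the deck.
expected = pert(best=low, most_likely=150, worst=high)  # ~160 hours

print(f"2/3 point:     {two_thirds_point:.0f} hours")
print(f"PERT expected: {expected:.0f} hours")
```

Both calculations deliberately lean toward the pessimistic side of the range, which matches the earlier point that problems can multiply but good luck is bounded.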
34. Using range estimation to communicate risk: the size of your range communicates the risk of your task, and may encourage you to break up tasks or define them better. Scrum is all about better communication with the customer – so are ranges.
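One way to act on “range size communicates risk” is to flag tasks whose high end dwarfs their low end. The 3x threshold and the example tasks below are my own illustration, not a rule from the deck.

```python
# Illustrative: treat a wide low-to-high ratio as a signal that a task
# needs to be broken up or better defined. The 3x cutoff is arbitrary.
tasks = {
    "login form": (4, 8),     # hours: fairly well understood
    "CMS import": (10, 80),   # huge spread: poorly defined
}

for name, (low, high) in tasks.items():
    spread = high / low
    if spread > 3:
        print(f"{name}: {low}-{high}h (x{spread:.0f}) -> break it up")
    else:
        print(f"{name}: {low}-{high}h looks well understood")
```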
35. How long? [Diagram: you tell your boss “2 days”; your boss tells the big boss “4 days.”] Do you know your fudge factor?
36. How long? [Diagram: you tell your boss “2-4 days”; your boss passes “2-4 days” straight up to the big boss.] Ranges help you control your fudge factor.
37. Another example: use ranges to better empower your boss or client. [Diagram: you, your boss, and the big boss.]
38. [Comic-strip slide: the big boss asks “How much for X?”; you answer “2 days,” the boss budgets 2 days × rate and hears “Perfect – do it!”; 4 days later the work has actually taken 4 days, the budget only covered 2, and the reaction is “GRRR.”]
42. Potential pitfalls of range estimation: really wide ranges. Not everything can take 2-200 hours, or you lose all credibility.
43. Potential pitfalls of range estimation: bosses who don’t get it. You’re going to have to sell them on how your estimates will improve their decision-making ability.
44. Potential pitfalls of range estimation: pushed-back deadlines. Ranges are not an excuse to always miss deadlines, but they do make a miss less of a surprise and encourage you to be more cautious.
45. Potential pitfalls of range estimation: is 2/3 the new single point? Be careful not to just start treating the calculated 2/3 estimate as another single-point estimate; use the ranges.