This presentation discusses the following:
What is an estimate?
What are the factors influencing estimating?
How are agile projects estimated?
How does Agile estimation solve common estimation problems?
The document discusses several techniques for estimating the size and complexity of features in agile development projects, including planning poker, decomposition, and using ideal time vs elapsed time. It emphasizes that estimation in agile focuses on relative sizing rather than durations, and that estimates are intentionally vague at first and improve over time based on measuring team velocity. Key goals of iteration planning meetings are to set commitments and arrive at a prioritized backlog for the upcoming sprint.
Planning Poker is a technique used to estimate effort for tasks in Agile software development. It involves each team member privately selecting a planning poker card representing their estimate for a task. The cards have Fibonacci numbers written on them. The cards are then revealed and discussed if estimates differ, until consensus is reached. Once estimates are established, the team's velocity (amount of work completed per sprint) can be used to predict future release dates. Planning Poker works well because it leverages the wisdom of crowds and averages individual estimates for more accurate results.
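The mechanics described above can be sketched in a few lines. This is a hypothetical illustration only: the card deck and the simple "all revealed cards equal" consensus rule are assumptions, not a prescribed implementation.

```python
# Hypothetical sketch of a Planning Poker round (card deck and the simple
# "all revealed cards equal" consensus rule are illustrative assumptions).
FIB_CARDS = [1, 2, 3, 5, 8, 13, 20, 40, 100]

def nearest_card(raw_estimate):
    """Snap a raw gut-feel number to the closest card in the deck."""
    return min(FIB_CARDS, key=lambda card: abs(card - raw_estimate))

def poker_round(private_estimates):
    """One reveal: return the consensus card, or None if discussion is needed."""
    cards = [nearest_card(e) for e in private_estimates]
    return cards[0] if len(set(cards)) == 1 else None

# First reveal: estimates diverge, so the outliers explain their reasoning.
print(poker_round([4, 5, 13]))   # no consensus yet
# Second reveal, after discussion narrows the spread: consensus at 5 points.
print(poker_round([5, 5, 6]))
```

In practice the re-vote loop is driven by conversation, not code; the point of the sketch is that consensus is reached on the card scale, not on raw numbers.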
This document discusses effort estimation in agile projects. It recommends estimating tasks by relative size using story points rather than absolute time values. Planning poker, where a team privately selects effort estimate cards and then discusses them, is advocated as it emphasizes relative estimation and reduces anchoring bias. Velocity, the number of points a team can complete per iteration, is key for planning and adjusting for estimation errors over time. Burn down charts also increase visibility of progress.
Agile Patterns: Agile Estimation
We’re agile, so we don’t have to estimate and have no deadlines, right? Wrong! This session will consist of a review of the problems with estimation in projects today, followed by an overview of the concept of agile estimation and the notion of re-estimation. We’ll learn about user stories, story points, and team velocity, and how to apply them all to estimation and iterative re-estimation. We will take a look at the cone of uncertainty and how to use it to your advantage. We’ll then take a look at the tools we will use for Agile Estimation, including planning poker, Visual Studio Team System, and much more. This is a very interactive session, so bring a lot of questions!
This slide gives an excellent overview of Agile Planning and Estimation.
It will be really helpful if presented to a Scrum/Agile team to understand activities related to Release Planning, Sprint Planning, and Estimation.
Estimating is hard to get right;
Why is estimating hard to get right?;
Why do we need to estimate;
Agile estimating and planning;
Determine the team's velocity;
Identify features and stories;
Define stories or features;
Planning Poker;
Agile Release Plan;
What if you don’t know the team's velocity?;
Estimating from ideal team structure;
The effect of rework;
Proposals and SOWs;
My main goal is to share, and let you experiment with, some of the techniques I use when transforming teams into high-performing agile teams, by providing you with four (4) different ways to estimate projects in Agile.
This is a presentation I made at the beginning of this year to explain the basics of agile estimates. Although the presentation doesn't cover exceptions and some special cases (such as hours-based estimates), it's a good starting point. A text explaining the presentation in more detail will come soon on my Medium channel.
User stories are estimated in story points to plan project timelines. Story points are a relative unit used to estimate complexity rather than time. The team estimates stories together by first independently assigning points, then discussing to converge on a shared estimate. Velocity is calculated based on the number of points completed in an iteration to predict future capacity. Pair programming may impact velocity but not the story point estimates themselves. Estimates should consider the story complexity and effort from the team perspective rather than individuals.
An introduction to agile estimation and release planning — James Whitehead
The document provides an introduction to agile estimation and release planning. It discusses building a product backlog by creating user stories, estimating each story using complexity buckets and points, and ensuring user stories meet INVEST criteria. It also covers splitting large user stories, acceptance criteria, and ensuring the product backlog is DEEP by being detailed, estimated, emergent, and prioritized. Estimation techniques include relative sizing to other stories and complexity buckets for different aspects like user interface, business logic, etc. The document emphasizes that estimation is about relative size rather than fixed timelines and that consistency is more important than absolute accuracy.
This document discusses software development methodologies and estimating work. It provides biographical information about the author, including their experience in agile coaching and teaching. It then explores debates around estimating work, noting that estimates are not deadlines and focusing on understanding systems and accepting variability. Various estimation techniques are presented, such as planning poker, story points, and lead time. A real case study is shared showing how moving away from estimates to continuous delivery improved outcomes. The document emphasizes that #NoEstimates can work if work is done incrementally and rapidly to deliver value.
We’re agile, so we don’t have to estimate and have no deadlines, right? Wrong! This session will review the problem with estimations in projects today and then give an overview of the concept of agile estimation and the notion of re-estimation. We’ll learn about user stories, story points, team velocity, and how to apply them all to estimation and iterative re-estimation. We will take a look at the cone of uncertainty and how to use it to your advantage. We’ll then take a look at the tools we will use for Agile Estimation, including planning poker, Visual Studio TFS and much more.
The document discusses different approaches to estimation in waterfall and Scrum methodologies. In Scrum, teams estimate their own work in story points, which are relative units based on size and complexity. Story points help drive cross-functional behavior and do not decay over time. Ideal days estimates involve determining how long a task would take with ideal conditions and no interruptions. Planning poker uses story point cards to facilitate discussion and reach consensus on estimates. Release planning in Scrum involves estimating velocity over sprints to determine how many product backlog items can be completed.
Software management...for people who just want to get stuff done — Ciff McCollum
This document discusses concepts and techniques for software project management, including planning, estimation, execution, and retrospectives. It covers these concepts at the level of projects, milestones within projects, sprints, and individual stories. Key points emphasized include breaking work into small chunks, using techniques like planning poker and burndown charts, being honest about estimates, and using retrospectives to improve.
These are the slides from the Agile Estimation Workshop I gave at AgileChina 2015. The morning session covered opinion-based techniques. The afternoon covered empirical techniques based on cycle time, Little's Law, and Monte Carlo simulation.
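The empirical side mentioned above can be illustrated with a tiny Monte Carlo forecast. This sketch resamples historical per-item cycle times to produce a range instead of a single number; the history values and the single-piece-flow assumption are hypothetical simplifications.

```python
import random

def monte_carlo_forecast(cycle_times_days, items_remaining, trials=10000, seed=42):
    """Resample historical per-item cycle times to forecast total days,
    assuming one item in progress at a time (a deliberate simplification)."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.choice(cycle_times_days) for _ in range(items_remaining))
        for _ in range(trials)
    )
    # Report 50th and 85th percentile outcomes rather than a single number.
    return totals[trials // 2], totals[int(trials * 0.85)]

# Hypothetical history: days each finished item took, drawn from past iterations.
history = [2, 3, 3, 4, 5, 8, 2, 3]
p50, p85 = monte_carlo_forecast(history, items_remaining=12)
print(f"50% chance within {p50} days, 85% chance within {p85} days")
```

Real simulations typically model parallel work-in-progress and throughput per week, but the core idea is the same: let measured history, not opinion, generate the forecast distribution.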
Introduction to Agile Estimation & Planning — Amaad Qureshi
Presented by Natasha Hill & Amaad Qureshi
In this session, we will be covering the techniques of estimating Epics, Features and User Stories on an Agile project and then of creating iteration and release plans from these artefacts.
Agenda
1. Why traditional estimation approaches fail
2. What makes a good Agile Estimating and Planning approach
3. Story points vs. Ideal Days
4. Estimating product backlog items with Planning Poker
5. Iteration planning - looking ahead no more than a few weeks
6. Release planning - creating a longer-term plan, typically looking 3-6 months ahead
7. Q&A
This document discusses techniques for software project estimation. It recommends providing estimates as ranges rather than specific numbers, and always clarifying what an estimate will be used for. It emphasizes aggregating independent estimates, using past project data to calibrate estimates, and not negotiating estimates or commitments. Key techniques include decomposing work into independently estimable units, using the "law of large numbers" for accuracy, and re-estimating regularly based on actual project velocity. Overall, the document provides guidance for creating estimates that are useful without being overly precise commitments.
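The "law of large numbers" point above can be demonstrated with a small simulation: summing many independently estimated tasks yields a tighter relative spread than one big guess, because independent errors partially cancel. The task mean and standard deviation here are illustrative assumptions.

```python
import random
import statistics

def relative_spread(n_tasks, task_mean=5.0, task_sd=2.0, trials=5000, seed=1):
    """Simulate summing n independent task estimates and return the coefficient
    of variation of the total; it shrinks roughly as 1/sqrt(n)."""
    rng = random.Random(seed)
    totals = [
        sum(max(0.1, rng.gauss(task_mean, task_sd)) for _ in range(n_tasks))
        for _ in range(trials)
    ]
    return statistics.stdev(totals) / statistics.mean(totals)

# One big lump vs. the same work decomposed into 25 independently estimated tasks.
print(round(relative_spread(1), 2))    # wide relative spread
print(round(relative_spread(25), 2))   # much tighter
```

This only holds when the estimates really are independent; correlated errors (e.g. everyone anchoring on the same optimistic number) do not cancel, which is one reason the document stresses aggregating independent estimates.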
Rise and fall of Story Points. Capacity based planning from the trenches. — Mikalai Alimenkou
People in the Agile world use Story Points - for Agile coaches and trainers it is the easiest way to explain how estimation and planning should be done in the "new world". But this simple concept then breaks down in real practical cases. Nowadays teams consist of highly specialized people working on backend, frontend, testing, infrastructure, and more. It is almost impossible for them to share a common scale of complexity. This is just one of the problems we are going to cover in this talk.
To stay constructive, rather than just being an old-fashioned XP guy, Mikalai will share his experience with a more accurate and pragmatic estimation/planning technique - capacity-based planning.
This document provides an overview of agile stories, estimating, and planning. It discusses what user stories are, how to write them, and techniques for estimating story sizes such as story points. It also covers different levels of planning including release planning, iteration planning, and daily planning. The document is intended to provide background information on using agile methods for requirements management and project planning.
The document describes a method called "Planning Poker" for quickly estimating project tasks through group discussion and iterative voting. A facilitator leads a team in estimating how many chickens are needed for a dinner party for 20 people. Through three rounds of voting and discussion to clarify assumptions, the team converges on an estimate of 13 chickens. Planning Poker aims to leverage collective expertise while avoiding biases from individual experts.
- Story points are an arbitrary measure used by Scrum teams to estimate the effort required to implement a user story. Teams typically use a Fibonacci sequence like 1, 2, 3, 5, 8, 13, 20, 40, 100.
- Estimating user stories allows teams to plan how many highest priority stories can be completed in a sprint and helps forecast release schedules. The whole team estimates during backlog refinement.
- Stories are estimated once they are small enough to fit in a sprint and acceptance criteria are agreed upon. Teams commonly use planning poker where each member privately assigns a story point value and the team discusses until consensus is reached.
Scrum uses relative estimation and velocity to aid in planning and making trade-off decisions. Relative estimation involves comparing the effort of new requirements to previously estimated ones, which humans are better at than making absolute estimates. Velocity is the amount of work completed in an iteration, measured in story points or hours, and is useful for longer-term planning even though it varies over time. There are two types of Scrum planning: fixed-date planning estimates how much can be completed by a date based on velocity, while fixed-scope planning estimates the timeframe to complete all backlog items based on velocity. Both use velocity as a range rather than a precise prediction.
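The two planning modes just described can be sketched directly. The velocity range and backlog size below are hypothetical numbers for a team whose recent sprints completed 18-24 points.

```python
import math

def fixed_date_plan(velocity_range, sprints_remaining):
    """Fixed date: how many story points can plausibly be done, as a range."""
    lo, hi = velocity_range
    return lo * sprints_remaining, hi * sprints_remaining

def fixed_scope_plan(velocity_range, backlog_points):
    """Fixed scope: how many sprints to finish the backlog, as a range
    (fast velocity gives the optimistic bound, slow the pessimistic one)."""
    lo, hi = velocity_range
    return math.ceil(backlog_points / hi), math.ceil(backlog_points / lo)

# Hypothetical team: 18-24 points per sprint over recent history.
print(fixed_date_plan((18, 24), sprints_remaining=6))   # points by the date
print(fixed_scope_plan((18, 24), backlog_points=200))   # sprints to finish
```

Reporting both ends of the range, rather than a single midpoint, is what keeps velocity an observation instead of a commitment.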
This document discusses agile estimation and planning techniques. It recommends estimating tasks relatively using story points rather than absolute time estimates. Planning poker, where teams privately estimate tasks and then discuss estimates, is presented as an effective technique. Prioritizing a backlog by value, risk, and estimate allows teams to focus on the most important work. Iterative planning within sprints and tracking progress via burn down charts increases transparency.
The document discusses techniques for estimating work in Agile projects using story points and ideal days. It defines story points and ideal days, and explains how to assign estimates relatively by comparing stories rather than using specific units of time. The document also recommends estimating approaches like planning poker, re-estimating as stories change, and using the right units to keep estimates meaningful but relative.
Planning Poker is a consensus-based estimating technique used in agile software development methodologies like Scrum and Extreme Programming (XP). It involves a team estimating task lengths using cards displaying estimates in a Fibonacci sequence. The team discusses their estimates until reaching consensus, with the developer assigned the task having significant input. This engagement aims to create accurate estimates through discussion while avoiding one person influencing others.
The document discusses various techniques for estimating work in Agile projects, including story points and feature points. It explains that story points are used to estimate user stories and provide a relative measure of complexity, while feature points are used to estimate larger features. The document also describes planning poker, where teams discuss estimates and converge on a shared value through discussion. Finally, it notes that estimates may need adjusting over time based on team experience and environment factors.
Presentation on Agile Estimation from Agile:MK user group on 7 October 2019 by Rien Sach and John Donoghue https://www.meetup.com/Agile-MK-Agile-User-Group/events/262193686/
Estimates are not promises
Your gut lies
Premature estimation is sabotage
Big teams are slower than small ones
Beware unwarranted precision
Count all the things!
When in a pinch, use a proxy
You can’t negotiate math
This document provides an overview of software project estimation. It defines what an estimate is, discusses characteristics of good estimates, and common sources of estimation inaccuracy. An estimate is a preliminary calculation, not an exact target or commitment. Good estimates have uncertainty ranges and probabilities attached. Estimates become more accurate as a project progresses and uncertainty decreases. Common causes of inaccurate estimates include unstable requirements, missing tasks, optimism bias, and subjective guessing. The document is part one of a two-part overview on software estimation best practices.
This document provides guidance on improving estimates. It discusses expanding one's comfort zone to better understand related processes and people. Common estimation methods are outlined, including analogy, expert judgment, and task breakdown. The document emphasizes the importance of holistic, continuous estimation that considers risks, assumptions, and dependencies. It advises committing to estimates only when requirements are clear and risks are addressed, and avoiding arbitrary padding or unrealistic deadlines. Signs of poor estimates, like unreasonable assumptions or lack of deliverable definition, are identified as "estimate smells" to avoid.
The document discusses using feature points for agile release planning. It defines feature points and how they can be used to estimate user stories, features, and epics at different levels of a project. The key points are: feature points provide relative estimates independent of time units; epics are estimated by POs and architects, features by team leads, and stories by scrum teams; velocity is tracked in feature points to predict sprint and release completion; and principles for agile estimation emphasize basing estimates on facts, estimating often and small chunks, and communicating assumptions.
The document discusses the #NoEstimates movement in software development, which explores alternatives to traditional estimation practices. It notes that estimates often do not directly add value and the movement aims to reduce reliance on estimates or stop using them where possible. Key ideas include using story points instead of hours, limiting story sizes, and building cumulative flow diagrams to make decisions without estimates. The goal is to improve workflows so that estimates become unnecessary.
Estimating the size and effort required for projects and tasks is important for planning purposes but difficult to do with precision. Estimates are informed guesses that can vary due to factors like unclear requirements, lack of historical data, and scope changes. While estimates are not perfect, they provide value by enabling prioritization, collaboration, and iterative planning. Effective estimation techniques include using ranges rather than single points, factoring in assumptions, combining expert judgement with data-driven methods, and refining estimates over time as understanding improves.
Every year, software companies spend a huge amount of time and effort estimating large projects, and still end up regularly missing the mark - often by huge amounts. What the heck is going on? With all of the planning poker, and PI planning, and #noestimates, why isn't this figured out yet?
In this talk, we'll dive into probability theory and psychology to discover some of the common underlying causes for a lack of predictability. Once we understand why the world is so uncertain, we'll talk about how we can live with our estimation failures, while still thrilling our customers and maintaining enough predictability to succeed as an organization.
Estimates or #NoEstimates by Enes Pelko — Bosnia Agile
Do we need estimates? Are the estimates abused so much that they became unusable? There is a new emerging movement behind #NoEstimates that thinks so. But is it for anyone and in any situation?
This document discusses strategies for estimating software development project delivery. It will cover traditional and Agile techniques for estimation, including examining the purpose of estimates, differences between estimates and guarantees, and how estimation works in Scrum and Kanban environments. Attendees will learn about estimation strategies as a project manager or developer working with business partners.
A Practical Guide to Answering This Question in an Agile Project. Software project estimation is hard. But our stakeholders need answers. In this presentation we seek to give our stakeholders the information that they need.
The document discusses issues with estimation in software projects. It notes that traditional estimation approaches fail because they ignore uncertainty and complexity. While Agile aims to help with lighter estimation practices, there is still risk of falling into the same traps as traditional methods. The key problems are how estimates are used, with unrealistic targets, imposed deadlines, and lack of respect causing issues. Respecting uncertainty and using estimates appropriately is emphasized as important.
Maximising Capital Investments - is guesswork eroding your bottom line? — Michael McKeon
Globally, organisations waste US$122 million for every US$1 billion invested due to poor project performance. Daniel Galorath, the world’s leading expert in project estimation, explains why - and how to create better outcomes.
You’re an expert developer, peacefully composing code into a profoundly elegant masterpiece, when suddenly your boss rushes in with the Next Big Idea that will Revolutionize The Way People Use The Internet. He’s on his way to pitch to a VC, and stops by to describe the Idea in excited terms. After a 30 second elevator pitch, he pops the question: “So, Peter, how long do you think it will take to build this thing-a-ma-bob?”
What do you say?
These eight Protips will cover your back, save your job, and keep your boss’s shirt.
significance_of_test_estimating_in_the_software_development.pdf — sarah david
Accurate estimation helps project managers maintain a well-organized project timeline. By having a clear understanding of the time required for testing activities, realistic schedules can be developed, ensuring effective coordination with development and other project tasks.
The document discusses estimation techniques. It presents five estimation laws: 1) Don't estimate if you can measure, 2) compare instead of estimating units, 3) measure things that are measurable, 4) reduce precision of estimates based on knowledge, and 5) use different metrics for different estimates. Good practices discussed include using story sizing for requirements, measuring in hours for small tasks, using velocity, splitting large stories, and measuring fixed cycle times. The document provides resources for further learning about agile estimation techniques.
Software estimation is something almost all developers working with Scrum have to do every sprint, but do they really know what a story point means? This presentation introduces some concepts and techniques of software estimation. It is a starting point for developers to figure out what could work better in their estimation sessions.
This document provides information about agile estimation techniques, including story points and planning poker. It discusses how story points are used to provide relative estimates of complexity rather than time estimates. Planning poker is described as a consensus-based technique where a team privately selects story point cards before discussing to reach agreement. The document also covers insights around how additional details don't necessarily lead to better estimates and how past sprint performance can inform long-term planning estimates. Common questions about estimation techniques are addressed.
Dark Art of Software Estimation, 360iDev 2014 (Carl Brown)
The document discusses best practices for creating accurate software project estimates. It recommends estimating at the task level by breaking projects down into granular tasks. Thorough planning is important to generate reliable estimates. Other factors like team familiarity, task independence, and certainty of details can impact estimate quality. The document emphasizes that estimates are predictions and cannot predict the future with certainty.
5. A little about me… Senior Consultant at OpenSource Connections in Charlottesville, Virginia. Master's in Management of IT, University of Virginia, McIntire School of Commerce. We tweaked our Scrum process to incorporate range estimation based on my studies at UVA. Please take the estimation survey: http://www.surveymonkey.com/s/SWNNYQJ
6. The root of all estimation evil: single-point estimates. Chart taken from: Software Estimation, Steve McConnell, Figure 1-1, p. 6. “A single-point estimate is usually a target masquerading as an estimate.” (Steve McConnell)
7. A realistic estimate distribution. Chart taken from: Software Estimation, Steve McConnell, Figure 1-3, p. 8. “There is a limit to how well a project can go but no limit to how many problems can occur.” (Steve McConnell) The nominal outcome is the 50/50 estimate.
8. Reasons we are wrong so often: different information, different methods, psychological biases, and the expert problem.
9. Bias in estimation. Imagine this scenario: “Can you build me that CMS website in 2 weeks?” How would you respond? What estimate would you give?
10. Bias in estimation. By supplying my own estimate (or desire) in my question, I have anchored your response. This is called the anchoring or framing trap. “Because anchors can establish the terms on which a decision will be made, they are often used as a bargaining tactic by savvy negotiators.” From “The Hidden Traps in Decision Making,” Harvard Business Review, 1998, John Hammond, Ralph L. Keeney, and Howard Raiffa.
11. You’re not as good as you think: “the expert problem.” Experts consistently underestimate their margins of error and discount the reasons they were wrong in the past. Excuses for past mistakes: you were playing a different game; invoke the outlier; the “almost right” defense. From The Black Swan: The Impact of the Highly Improbable, Nassim Nicholas Taleb, 2007, Chapter 10: The Scandal of Prediction.
12. The best protection. “The best protection against all psychological traps – in isolation or in combination – is awareness.” From “The Hidden Traps in Decision Making,” Harvard Business Review, 1998, John Hammond, Ralph L. Keeney, and Howard Raiffa.
14. How agile already avoids pitfalls: it encourages team airing of estimates; estimation is done before assignment of tasks; Scrum poker.
15. How agile already avoids pitfalls: it separates the story from time units, making estimates more relative (story points and velocity). Image from: http://leadinganswers.typepad.com/leading_answers/2007/09/agile-exception.html
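Velocity is what turns relative story points back into calendar time. A minimal sketch of that arithmetic, with illustrative numbers not taken from the slides:

```python
import math

def sprints_remaining(backlog_points, velocity):
    # Sprints needed to burn down the backlog; a partly used sprint
    # still counts as a whole sprint, so round up.
    return math.ceil(backlog_points / velocity)

# A 120-point backlog at a measured velocity of 25 points per sprint:
print(sprints_remaining(120, 25))  # -> 5
```

Because velocity is measured rather than guessed, the forecast self-corrects as the team's real throughput becomes known.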
22. Incorporating range estimation into Scrum: the team originally estimated 108 hours; the range estimate came out at 114-245 hours. Note that the original single-point estimate fell below even the low end of the range! They were still able to finish the original tasks a little early.
23. Range estimation in Scrum Poker: it’s very simple – just hold up two cards instead of one! The same rules apply about creating discussion between low and high estimators, but you might resolve the differences differently...
24. Range estimation in Scrum Poker, on the low end vs. the high end (card examples). The likely discussion: “Hey Orange, why do you say 2? Yellow and Blue both say 5.” Likely outcome: 3 or 5, middle of the road.
25. Range estimation in Scrum Poker: still middle of the road, but Green recognizes some risk; Orange sees this as really easy; Blue sees this as more complicated. The likely discussion: Orange and Blue need to compare their visions for this task. Red and Blue no longer agree: Red is confused or sees big risks. Likely outcome: 8-13?
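One way to roll up a round of two-card votes is to take the widest span the team produced and flag it when the spread itself signals risk. This is a sketch of that idea, not a prescribed Scrum Poker rule; the `team_range` helper and its width threshold are my own illustration:

```python
def team_range(votes, wide_ratio=3):
    # votes: one (low, high) card pair per team member.
    lows = [lo for lo, _ in votes]
    highs = [hi for _, hi in votes]
    low, high = min(lows), max(highs)
    # A high/low ratio at or above wide_ratio suggests the task should
    # be discussed further or broken up (the threshold is arbitrary).
    return low, high, high / low >= wide_ratio

# Roughly the situation on this slide: one member sees real risk.
print(team_range([(5, 8), (5, 5), (8, 13)]))  # -> (5, 13, False)
print(team_range([(2, 13), (3, 5)]))          # -> (2, 13, True)
```

The flag is only a prompt: per the slides, disagreements are resolved by discussion between the low and high estimators, not by arithmetic.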
32. Why 2/3? Because it is both simple and pessimistic. PERT does a similar thing: Expected = [BestCase + (4 × MostLikely) + WorstCase] / 6. Source on PERT: Software Estimation, Steve McConnell, p. 109.
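The PERT formula, plus one possible reading of the 2/3 rule (taking the point two-thirds of the way from the low to the high estimate; the slide doesn't spell the calculation out, so that reading is an assumption):

```python
def pert_expected(best_case, most_likely, worst_case):
    # Expected = [BestCase + (4 * MostLikely) + WorstCase] / 6
    return (best_case + 4 * most_likely + worst_case) / 6

def two_thirds_estimate(low, high):
    # Point two-thirds of the way up the range: simple and pessimistic,
    # leaning toward the high end because problems outnumber windfalls.
    return low + 2 * (high - low) / 3

print(pert_expected(2, 4, 12))                   # -> 5.0
print(round(two_thirds_estimate(114, 245), 1))   # -> 201.3
```

Both formulas weight the pessimistic side, reflecting McConnell's point that there is no limit to how many problems can occur.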
34. Using range estimation to communicate risk: the size of your range communicates the risk of your task, and may encourage you to break up tasks or define them better. Scrum is all about better communication with the customer – so are ranges.
35. How long? You: “Um… 2 days.” Your boss tells the big boss: “4 days.” Do you know your fudge factor? (You → Your Boss → Big Boss)
36. How long? You: “2-4 days.” Your boss tells the big boss: “2-4 days.” Ranges help you control your fudge factor. (You → Your Boss → Big Boss)
37. Another example: use ranges to better empower your boss or client. (You → Your Boss → Big Boss)
38. The single-point version of the conversation (You → Your Boss → Big Boss): “How long? How much for X?” “Umm… 2 days.” Your boss budgets 2 days × rate. “Perfect – do it!” 4 days later: “Actually… 4 days.” “GRRR.” Budget left: 2 days.
42. Potential pitfalls of range estimation: really wide ranges. Not everything can take 2-200 hours, or you lose all credibility.
43. Potential pitfalls of range estimation: bosses who don’t get it. You’re going to have to sell them on how your estimates will improve their decision-making ability.
44. Potential pitfalls of range estimation: pushed-back deadlines. Ranges are not an excuse to always miss deadlines, but they do make misses less of a surprise and encourage you to be more cautious.
45. Potential pitfalls of range estimation: is 2/3 the new single point? Be careful not to start treating the 2/3 calculated estimate as just another single-point estimate – use the ranges.