Since it is an imperfect world, Storyteller specifications will sometimes fail -- and since Storyteller is generally used for integration scenarios and expressed in high-level language that is purposely disconnected from the code artifacts, it has been difficult from time to time (read: frequently) to troubleshoot failing specifications. At the same time, Storyteller usage has suffered in the past from performance problems and long testing times. While the Storyteller team has invested a lot of time into performance optimizations in Storyteller itself, the 3.0 release has added some long overdue tooling to measure and expose the performance characteristics of the specifications so that teams can identify where the real throughput problems are.
StoryTellerAssert
The original means of adding more contextual information about specification failures is the old `StoryTellerAssert` static class, with its usage shown below:
```csharp
// Fail immediately with this message
StoryTellerAssert.Fail("Something is wrong");

// Fail if the Func<bool> filter is true
StoryTellerAssert.Fail(() => true, "The condition is {0}", true);

// Fail if the boolean condition is true
var value = 1;
StoryTellerAssert.Fail(value > 0, "The number was not supposed to be bigger than zero");
```
While you can always just throw an exception in a grammar to provide more information about invalid conditions, using this functionality will allow you to add readable contextual error messages into the html results to help explain failures without adding an ugly .Net stacktrace.
A classic example of using `StoryTellerAssert` is a grammar that is supposed to select a given value in a select box on the screen. If the element is hidden, does not exist, or the desired value is not part of the options, use `StoryTellerAssert` to make the normally quiet grammar fail with some kind of contextual explanation of why that action is invalid for easier debugging later, as in the sketch below.
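As a hypothetical illustration only -- the fixture name, the `Driver` property, and the element ids here are assumptions for the sketch, not part of Storyteller -- a Selenium-backed grammar might check for those conditions before selecting:

```csharp
using System.Linq;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;
using StoryTeller;

public class SelectBoxFixture : Fixture
{
    // Assumed to be initialized elsewhere in the fixture's SetUp()
    public IWebDriver Driver { get; set; }

    [FormatAs("Select {value} in the {elementId} drop down")]
    public void SelectValue(string elementId, string value)
    {
        var element = Driver.FindElements(By.Id(elementId)).FirstOrDefault();

        // Fail with a readable explanation instead of a cryptic WebDriver exception
        StoryTellerAssert.Fail(element == null,
            $"Could not find a select box with id '{elementId}'");

        StoryTellerAssert.Fail(!element.Displayed,
            $"The select box '{elementId}' is hidden");

        var select = new SelectElement(element);
        StoryTellerAssert.Fail(select.Options.All(x => x.Text != value),
            $"'{value}' is not one of the options in '{elementId}'");

        select.SelectByText(value);
    }
}
```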
Try not to use `StoryTellerAssert` in place of assertions or facts -- especially when that results in much more work on your part.
Performance Tracing
Out of the box, Storyteller tracks the runtime of each grammar and silent action within the Fixture classes. If you want to add your own timing records to the performance tracing, you can use the new `ISpecContext.Timings.Record()` method introduced in Storyteller 4.1:
[FormatAs("Open the browser to {url}")]
public void OpenUrl(string url)
{
var corrected = correctUrl(url);
// This records the amount of time it takes to
// open a Url in a WebDriver controlled browser
using (Context.Timings.Record("Navigation", url))
{
Driver.Navigate().GoToUrl(corrected);
}
}
In the past, this functionality has been used to record (a small HTTP example is sketched after this list):
- HTTP requests handled by the application during a specification
- Messages handled by a service bus
- Page navigations when using Selenium/WebDriver
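As a minimal sketch of the first item, reusing the same `Record()` API from a fixture that drives the application over HTTP -- the fixture name, the `HttpClient`, and the URL are assumptions for illustration, not anything Storyteller provides:

```csharp
using System.Net.Http;
using StoryTeller;

public class HttpFixture : Fixture
{
    // Hypothetical client pointed at the system under test
    private readonly HttpClient _client = new HttpClient();

    [FormatAs("Load the home page of the application")]
    public void LoadHomePage()
    {
        // Capture how long each HTTP round trip takes during the specification
        using (Context.Timings.Record("Http", "GET /"))
        {
            _client.GetAsync("http://localhost:5000/").GetAwaiter().GetResult();
        }
    }
}
```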
Exception Formatters
New to Storyteller 4.0 is the ability to tell Storyteller how to format the display of an exception that is caught during the
execution of the specification. The default is still to just dump the full stacktrace, but if you want to get better results, you
can override the formatting for individual Exception
types like these unit tests:
```csharp
[Fact]
public void render_with_custom_markdown_formatting()
{
    var message = "**Dude, you're trying to divide by zero!**";

    // This is registering a conversion to markdown text from any
    // exception of type DivideByZeroException
    ExceptionFormatting.AsMarkdown<DivideByZeroException>(ex => message);

    var exception = new DivideByZeroException();

    ExceptionFormatting.ToDisplayMessage(exception, out var display)
        .ShouldBe(message);

    display.ShouldBe(ErrorDisplay.markdown);
}

[Fact]
public void render_with_custom_text_formatting()
{
    var message = "**Dude, you're trying to divide by zero!**";

    // Register a purely textual representation of the exception
    // that maintains whitespace
    ExceptionFormatting.AsText<DivideByZeroException>(ex => message);

    var exception = new DivideByZeroException();

    ExceptionFormatting.ToDisplayMessage(exception, out var display)
        .ShouldBe(message);

    display.ShouldBe(ErrorDisplay.text);
}
```
Note that you can register the exception formatting anywhere because `ExceptionFormatting` is just a static class, but idiomatically you probably want those calls in your `ISystem` or maybe in the static constructor of your `Fixture` classes.
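For instance, a minimal sketch of doing that registration from a `Fixture` class's static constructor might look like this -- the fixture name, grammar, and exception type are placeholders for illustration:

```csharp
using System;
using StoryTeller;

public class OrderingFixture : Fixture
{
    // A static constructor runs once before the fixture type is first used,
    // so this registration is in place before any specification executes
    static OrderingFixture()
    {
        ExceptionFormatting.AsMarkdown<TimeoutException>(
            ex => $"**The system under test timed out:** {ex.Message}");
    }

    [FormatAs("Place an order for {quantity} widgets")]
    public void PlaceOrder(int quantity)
    {
        // Grammar body elided; any TimeoutException thrown here will now
        // be rendered with the markdown message registered above
    }
}
```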
If it's easier or useful, you can throw a custom failure exception that inherits from `StorytellerFailureException`, like this sample below that returns a markdown string representing the error:
```csharp
public class MarkdownFailureException : StorytellerFailureException
{
    public MarkdownFailureException(string message) : base(message, ErrorDisplay.markdown)
    {
    }

    public override string FormattedMessage()
    {
        return $"**Error:** {Message}";
    }
}
```
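A grammar can then throw that exception directly when it detects an invalid condition. This is just a hypothetical usage -- the fixture and grammar names are invented for the sketch:

```csharp
using System.IO;
using StoryTeller;

public class FileUploadFixture : Fixture
{
    [FormatAs("The uploaded file {fileName} should be valid")]
    public void FileShouldBeValid(string fileName)
    {
        if (!File.Exists(fileName))
        {
            // Surfaces in the html results as formatted markdown
            // rather than as a raw stacktrace
            throw new MarkdownFailureException($"No file named '{fileName}' was uploaded");
        }
    }
}
```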
Debug Output
If you are compiling with the `Debug` profile in your local work, any calls to .Net's built-in `Debug.WriteLine()` during the execution of a specification will show up in a section at the very bottom of the specification results html. If you are targeting Netstandard with the CoreCLR, you have to use `Trace.WriteLine()` instead. But don't worry, because there's now a `Fixture.Trace(string)` function in Storyteller itself that wallpapers over the difference like so:
```csharp
public void WriteTrace(string message)
{
#if NET46
    Debug.WriteLine(message);
#else
    Trace.WriteLine(message);
#endif
}
```
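In your own fixtures you can just call the `Trace()` helper mentioned above from any grammar. The fixture and message below are purely illustrative:

```csharp
using StoryTeller;

public class MathFixture : Fixture
{
    [FormatAs("Add {x} and {y}")]
    public void Add(int x, int y)
    {
        // Shows up in the "Debug Output" section of the results html
        Trace($"Adding {x} + {y}");
    }
}
```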
The sample below is from the internal specifications on documentation generation, added one day to troubleshoot a failing specification:
Debug Output
```
Loading topic file C:\Users\jeremill\AppData\Local\Temp\fc517eaa-8c3b-4d2f-8dc3-e4d0e923bed4\blue.md
Parsing topic file C:\Users\jeremill\AppData\Local\Temp\fc517eaa-8c3b-4d2f-8dc3-e4d0e923bed4\blue.md
Loading topic file C:\Users\jeremill\AppData\Local\Temp\fc517eaa-8c3b-4d2f-8dc3-e4d0e923bed4\index.md
Parsing topic file C:\Users\jeremill\AppData\Local\Temp\fc517eaa-8c3b-4d2f-8dc3-e4d0e923bed4\index.md
Loading topic file C:\Users\jeremill\AppData\Local\Temp\fc517eaa-8c3b-4d2f-8dc3-e4d0e923bed4\lightbrown.md
Parsing topic file C:\Users\jeremill\AppData\Local\Temp\fc517eaa-8c3b-4d2f-8dc3-e4d0e923bed4\lightbrown.md
Loading topic file C:\Users\jeremill\AppData\Local\Temp\fc517eaa-8c3b-4d2f-8dc3-e4d0e923bed4\purple.md
Parsing topic file C:\Users\jeremill\AppData\Local\Temp\fc517eaa-8c3b-4d2f-8dc3-e4d0e923bed4\purple.md
Loading topic file C:\Users\jeremill\AppData\Local\Temp\fc517eaa-8c3b-4d2f-8dc3-e4d0e923bed4\red.md
Parsing topic file C:\Users\jeremill\AppData\Local\Temp\fc517eaa-8c3b-4d2f-8dc3-e4d0e923bed4\red.md
```
Exceptions
Because we have had some periodic trouble with exceptions not being surfaced correctly through the html results (we swear this is fixed now), there's a built-in results tab that just shows the stacktraces for every .Net exception detected by Storyteller during a specification run.
Custom Logging
A relatively new feature in Storyteller is the ability to add custom HTML to the results report. We originally built this functionality to use Storyteller against a system with quite a bit of distributed messaging where we would write a threaded history of the messages sent, received, and handled by the service bus endpoints during the execution. That has been successful enough that Storyteller 3.0 has formalized and generalized that mechanism.
We are now using the instrumentation extensibility shown below to log the HTTP requests handled by our web application during specifications so that our teams have a much better understanding of just what is going on inside of the system during specification execution. We are also using the custom logging of HTTP requests and service bus message handling to understand where there may be opportunities for performance improvement by combining or batching up HTTP requests made from our client to our server.
The entry point to customized logging is the small `IReporting` service exposed off of `ISpecContext.Reporting`:
```csharp
public interface IReporting
{
    T ReporterFor<T>() where T : Report, new();

    void Log(string title, string html, string shortTitle = null);
    void Log(Report report);
}
```
The `ISpecContext` service is available as the `Context` property inside of `Fixture` classes, where it can be used by any grammar code as in this sample:
```csharp
public class LoggingFixture : Fixture
{
    [FormatAs("Do something that requires custom logging")]
    public void DoSomething()
    {
        Context.Reporting.Log(
            "I am making a custom log",
            "<h1>I did something</h1>");
    }
}
```
The reporting is also accessible in the `IExecutionContext.AfterExecution()` method. We exploit this method to add contextual information from the application's own diagnostics at the end of each specification.
```csharp
public interface IExecutionContext : IDisposable
{
    // BeforeExecution() is a hook to potentially
    // set up state or do any kind of custom logging
    void BeforeExecution(ISpecContext context);

    // AfterExecution() is a hook to gather up custom logging
    // or to make custom assertions against the specification
    // execution
    void AfterExecution(ISpecContext context);

    T GetService<T>();
}
```
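As a rough sketch of that kind of hook -- the class name and the in-memory log sink below are invented for illustration, not part of Storyteller -- an `IExecutionContext` implementation might push the application's captured log entries into the report during `AfterExecution()`:

```csharp
using System;
using System.Collections.Generic;
using StoryTeller;

public class DiagnosticsExecutionContext : IExecutionContext
{
    // Hypothetical in-memory sink the application under test writes to
    private readonly List<string> _logEntries = new List<string>();

    public void BeforeExecution(ISpecContext context)
    {
        _logEntries.Clear();
    }

    public void AfterExecution(ISpecContext context)
    {
        // Dump whatever the application logged during this specification
        // into its own section of the html results
        var html = "<pre>" + string.Join(Environment.NewLine, _logEntries) + "</pre>";
        context.Reporting.Log("Application Log", html, "App Log");
    }

    public T GetService<T>()
    {
        // Service location is not the point of this sketch
        throw new NotSupportedException();
    }

    public void Dispose()
    {
    }
}
```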
Another way to use custom reporting to gather up html results in a single section is to write your own `Report` implementation:
```csharp
public interface Report
{
    string ToHtml();

    string Title { get; }
    string ShortTitle { get; }
    int Count { get; }
}
```
A custom `Report` from Storyteller's internal tests is shown below:
```csharp
public class ListReport : Report
{
    private readonly HtmlTag _ul = new HtmlTag("ul").AddClass("list-group");
    private int _count;

    public string ToHtml()
    {
        return _ul.ToString();
    }

    public ListReport Add(string text)
    {
        _count++;
        _ul.Add("li").AddClass("list-group-item").Text(text);
        return this;
    }

    public string Title
    {
        get { return "Some Numbers"; }
    }

    public string ShortTitle
    {
        get { return "Numbers"; }
    }

    public int Count
    {
        get { return _count; }
    }
}
```
In usage, you can share a custom `Report` between grammars and fixtures by using the syntax below:
```csharp
Context.Reporting.ReporterFor<ListReport>().Add(_number.ToString());
```
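To make that sharing concrete, here is a hypothetical fixture (the fixture and grammar names are invented) where two different grammars append to the same `ListReport` for the current specification:

```csharp
using StoryTeller;

public class NumbersFixture : Fixture
{
    [FormatAs("Record the number {number}")]
    public void RecordNumber(int number)
    {
        // ReporterFor<T>() hands back the shared ListReport for this specification
        Context.Reporting.ReporterFor<ListReport>().Add(number.ToString());
    }

    [FormatAs("Record the sum of {x} and {y}")]
    public void RecordSum(int x, int y)
    {
        Context.Reporting.ReporterFor<ListReport>().Add((x + y).ToString());
    }
}
```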
Not that it's very interesting, but here is the html that gets added to the very bottom of the specification results from the logging shown above:
Some Numbers
- 5
- 19
- 17
Using the Specification Timing
New for Storyteller 3.0 is some tracing for the performance of the steps within a specification. At the bottom of the specification results html you should find a table of data like this one from an internal Storyteller specification:
Execution Timing (all timings in milliseconds)

| Type | Subject | Start | End | Duration |
|---|---|---|---|---|
| Specification | Navigation structure for a single directory without explicit ordering | 0 | 23 | 23 |
| Context | Creation | 2 | 4 | 2 |
| Fixture | TopicFolder:SetUp | 4 | 4 | 0 |
| Grammar | TheTopicsAre:Row | 4 | 9 | 5 |
| Grammar | TheTopicsAre:Row | 9 | 12 | 3 |
| Grammar | TheTopicsAre:Row | 12 | 12 | 0 |
| Grammar | TheTopicsAre:Row | 12 | 13 | 1 |
| Grammar | TheTopicsAre:Row | 13 | 20 | 7 |
| Grammar | rows | 20 | 23 | 3 |
| Fixture | TopicFolder:TearDown | 23 | 23 | 0 |
This table tracks the time in milliseconds spent in each step by grammar and all the "silent actions" like creating the execution context and the `Fixture.SetUp()/TearDown()` methods that happen behind the scenes.
Exporting the Performance Data
To make it easier to spot and diagnose performance and throughput problems with the Storyteller specifications and probably the application itself, Storyteller allows you to dump the performance data shown above to file formats suitable for analysis by other tools.
The raw performance data can be exported from the `st run` command in a CSV format suitable for easy import into a spreadsheet or database with the `--csv` flag like this: `st run [directory] --csv [file name]`.
The same performance data can be exported to a JSON format with the `--json` flag: `st run [directory] --json [file name]`.