Why do YOU avoid Extended Events?

On Monday this week I had an interesting exchange on Twitter with a bunch of folks who are die-hard Profiler/Trace users, and have no interest in using Extended Events.  To wit:

Tweet from Denny about Profiler (used with permission)

Now, Denny and I are good friends, and his tweet didn’t upset me in any way, it just got me thinking.  Why are DBAs and developers so resistant to using Extended Events?  I have some theories, but I realized I should collect some data.

Therefore, whether you:

  • Have never tried Extended Events,
  • Have tried Extended Events but would rather keep using Profiler/Trace,
  • Use Extended Events occasionally but still prefer Profiler/Trace,

I want to hear from you.  Whatever the reason, I want to know – so please leave a comment below.  My goal is to understand what the main challenges are so that I can then provide options and solutions, or create Connect items for the product team to address any gaps in functionality.

Extended Events *is* the replacement for Profiler/Trace; it’s not going away.  I really want people to be prepared for the time when Profiler and Trace are removed from the product.  And I want to provide feedback to the SQL Server product team to address limitations that people find in Extended Events.  If the feature is lacking something, we need to work together to create change.

Thanks in advance for your help, and if you haven’t tried XE, or are looking for a refresher, then please attend my webinar next Tuesday, April 5th at 12PM EDT: Kicking and Screaming: Replacing Profiler with Extended Events.  I’d love to see you there and can help get you started with XE!

EDIT 2:37PM EDT: If you are still running 2008 or 2008R2, then our advice has always been to stick with Trace and Profiler.  If you’re running SQL Server 2012 and higher, then I recommend Extended Events.  Why?  Because it wasn’t until SQL Server 2012 that every event from Trace had a comparable event in Extended Events.  So, if your argument is that you don’t want to learn XQuery and XML because you’re on 2008 or 2008R2, I’m right there with you and will tell you that’s fine; wait until 2012 to use XE.

Instant File Initialization: Easier to Enable in SQL Server 2016 (and some updated numbers)

The ability to have SQL Server data files skip zero initialization when they are created or grown has been available since SQL Server 2005.  By default, when you create a new data file in SQL Server, or extend the size of an existing one, zeroes are written to the file.  Depending on the size of the file or its growth, and the type of storage, this can take a while.  With Instant File Initialization (IFI), space is allocated for the data file but no zeroes are written.  Prior to SQL Server 2016, to enable this feature you had to edit the Local Security Policy to give the account that runs the SQL Server service the “Perform volume maintenance tasks” right (from Start | Run, type secpol, within the Local Security Policy expand Local Policies, then User Rights Assignment).  This was a task that DBAs had to perform separately from the SQL Server installation (or have a server admin do it for them), and if you did not make the change before installing SQL Server, then it required restarting SQL Server after making the change for it to take effect.  This has changed with the SQL Server 2016 installation, as you can now select the option “Grant Perform Volume Maintenance Task privilege to SQL Server Database Engine Service” when you specify the service accounts, and this will grant the right to the service account at that time.

Enable Instant File Initialization during SQL Server 2016 installation

There is a potential security risk to using this feature, which Kimberly discusses in her Instant Initialization – What, Why and How? post.  The information presented in her post is still valid and worth the read, but Glenn and I did re-run some tests recently, just to get some current numbers to show the benefits of IFI.  We ran the same four tests that Kimberly ran way back in 2007 (!) on four different sets of storage: two sets of 15K disks (one in a RAID10 array, the other in a RAID1 array) and two sets of flash storage (FusionIO cards).  More information on the storage at the end of the post.  The tests were:

  1. Create 20GB database
  2. Grow existing database by 10GB
  3. Restore 30GB empty database
  4. Restore 30GB database with 10GB data

The tests were run on two different physical servers, both running SQL Server 2014.  Details for each storage system are listed below for reference, and the test results were as we expected:

Duration for file modification or database restore, with and without IFI

When IFI is not enabled, the time to zero out a file and write data is a function of sequential write performance on the drive(s) where the SQL Server data file(s) are located.  When IFI is enabled, creating or growing a data file is so fast that the time is of no significant consequence; the time it takes to create or grow a file varies by only seconds between 15K, SSD, flash, and magnetic storage.  However, if you do not enable IFI, there can be drastic differences in create, grow, and restore times depending on storage.
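
If you want a quick way to confirm whether IFI is in effect for an instance, newer builds expose it directly: sys.dm_server_services has an instant_file_initialization_enabled column starting in SQL Server 2016 SP1 (on earlier builds you’re back to checking the Local Security Policy).  A quick check might look like this:

/*
check whether IFI is in effect for the instance
note: the instant_file_initialization_enabled column only exists
in SQL Server 2016 SP1 and later builds of sys.dm_server_services
*/
SELECT
    servicename,
    service_account,
    instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE N'SQL Server (%';
GO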

Storage Details:

  • 15K RAID10 = Six (6) 300GB 15K disks in RAID 10
  • Flash Drive1 = 640GB Fusion-io ioDrive Duo
  • Flash Drive2 = 2.41TB Fusion-io ioDrive2 Duo
  • 15K RAID1 = Two (2) 300GB Seagate Savvio 15K drives in RAID 1

Note: This post was edited on April 13, 2016, to clarify the storage configuration based on a helpful comment.

SQL Server Setup Error: The directory name is invalid

A few weeks ago I got an email from someone who had attended our Accidental DBA IE class last year, and this person was getting the following error when trying to apply a cumulative update:

SQL Server Setup failure: The directory name is invalid.

The initial email didn’t have a lot of details, so I started asking questions to understand what version was being installed, the environment configuration, etc.  Turns out this was a two-node Windows Server Failover Cluster (WSFC) with multiple SQL Server 2012 instances installed, and one of the instances was still running on the node this person was trying to patch.  To be clear, the two nodes were SRV1 and SRV2, and the instances were PROD-A and PROD-B running on SRV1, and PROD-C which was running on SRV2.  This person was trying to install the cumulative update on SRV2.

Now, those of you that manage clusters may be thinking “Doesn’t this DBA know that the way you do rolling upgrades is by not having any instances running on the node you’re trying to patch?”  Well, not everyone is an experienced DBA, a lot of people are Accidental or Junior DBAs, and if this is the first cluster you’re supporting, you may not know that, or understand why.  Further, when you update a single node on a stand-alone server (one that’s NOT in a cluster) it’s not like you shut down the instance yourself and apply the CU, right?

We checked the summary installation log, located in C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log and found the following Exit message:

The directory ‘M:\a13e546ad3e0de04a828\’ doesn’t exist.

The M drive was a resource for PROD-C, along with the N drive.  There was also a quorum drive (Q) and the local C drive.  So how was M not available?

Well, it was available initially, when the install started.  When the installer runs, it puts the files on the first network drive it finds (if it’s an administrative installation), or on the drive with the most free space (see: ROOTDRIVE property).  In this case, the M drive met the criteria.  When the installer then stopped the instance and took the cluster disks offline, the M drive was suddenly gone, hence the invalid directory.

You could argue that this is a bug…maybe…but the solution I suggested was to move PROD-C over to the SRV1 node, then run the installation.  You could also specify the directory as part of a command-line install, therefore using a different disk, but downtime was permitted in this scenario, so the failover wasn’t a deal-breaker.  Once this was done, the installation ran fine, and the latest CU was applied on that node.  The DBA then went through the process of failing all the instances over to the patched node, and then applying the CU on SRV1.

As an aside, if you’re not sure of the current service pack, cumulative update, or hotfix available for your SQL Server version, I recommend this site which has all versions and releases and links to the downloads.  And, for those of you running SQL Server 2014, CU5 for SP1 just came out yesterday and has some interesting fixes (see https://support.microsoft.com/en-us/kb/3130926).
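
And if you want to confirm what build an instance is currently on before you go hunting for updates, these SERVERPROPERTY values (available for many versions now) will tell you:

/*
check the current build and patch level of the instance
*/
SELECT
    SERVERPROPERTY('ProductVersion') AS [ProductVersion],
    SERVERPROPERTY('ProductLevel') AS [ProductLevel],
    SERVERPROPERTY('Edition') AS [Edition];
GO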

Collection of Baseline Scripts

The topic of baselines in SQL Server is one that I’ve had an interest in for a long time.  In fact, the very first session I ever gave back in 2011 was on baselines.  I still believe they are incredibly important, and most of the data I capture is still the same, but I have tweaked a couple of things over the years.  I’m in the process of creating a set of baseline scripts that folks can use to automate the capture of this information, in the event that they do not have/cannot afford a third-party monitoring tool (note, a monitoring tool such as SQL Sentry’s Performance Advisor can make life WAY easier, but I know that not everyone can justify the cost to management).  For now, I’m starting with links to all relevant posts and then I’ll update this post once I have everything finalized.

These scripts are just a starting point for what to monitor.  One thing I like to point out in our IEPTO2: Performance Tuning and Optimization course is that there is A LOT of data you can capture related to your SQL Server environment.  Your options include Performance Monitor (using Custom Data Collectors), queries via Extended Events or Trace (depending on version), and any data from the DMVs or system views within SQL Server.  You have to decide what to capture based on:

1) What problem you might be trying to solve in your environment, and

2) What information is most important for you to have.  Start simple, and then work your way up.

Figure out the one or two most critical things to capture, and start there, and then add on.
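
As an illustration of that “start simple” approach, here’s a minimal sketch of snapshotting wait stats into a table on a schedule (an Agent job works fine for this).  The BaselineData database and WaitStatsSnapshot table are just placeholder names, not one of the scripts linked below:

/*
minimal baseline sketch: snapshot sys.dm_os_wait_stats into a table
(the database and table names are placeholders)
*/
USE [BaselineData];
GO

IF OBJECT_ID(N'dbo.WaitStatsSnapshot', N'U') IS NULL
BEGIN
    CREATE TABLE dbo.WaitStatsSnapshot (
        [CaptureDate] DATETIME2(0) NOT NULL CONSTRAINT DF_WaitStatsSnapshot_CaptureDate DEFAULT (SYSDATETIME()),
        [WaitType] NVARCHAR(60) NOT NULL,
        [WaitingTasksCount] BIGINT NOT NULL,
        [WaitTimeMs] BIGINT NOT NULL,
        [SignalWaitTimeMs] BIGINT NOT NULL
    );
END
GO

/* run this on a schedule to build up history */
INSERT INTO dbo.WaitStatsSnapshot ([WaitType], [WaitingTasksCount], [WaitTimeMs], [SignalWaitTimeMs])
SELECT [wait_type], [waiting_tasks_count], [wait_time_ms], [signal_wait_time_ms]
FROM sys.dm_os_wait_stats;
GO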

If you find there’s something missing from my scripts, let me know and I’ll try to get it added!

Monitoring in general

Configuration Information

Disk and I/O

Maintenance

Errors

Performance

Pluralsight

Capture Blocking Information with Extended Events and the Blocked Process Report

I am a big fan of Adam Machanic’s WhoIsActive script, and when customers have issues with performance, it’s one of the first tools I recommend because it’s so simple to use and provides great information.  Very often it helps with quickly determining an issue, but sometimes there’s a need to capture more information, particularly when locking and blocking are part of the issue.  Adam’s script has an option to include blocking information, for example including the [blocking_session_id] column in the output and using @find_block_leaders = 1 as a parameter.  But sometimes you need more information, like the blocked process report.

I’ve found one of the easiest ways to get that in SQL Server 2012 and higher is Extended Events.  If you’re running SQL Server 2005 and higher, you can use Event Notifications to capture the blocked process report, and that option is nice because you are notified when the problem occurs.  For those of you using SQL Server 2008R2 and below, you also have the option of capturing the blocked process report event through a server-side Trace.  Note: the blocked_process_report event does not exist in SQL Server 2008 or SQL Server 2008R2, which is why Trace is the method there.  The drawback to Extended Events is that you don’t get a notification that blocking occurred, but for those who are not as comfortable with Event Notifications – for whatever reason – Extended Events is a very simple alternative.

The Setup

In order to capture a blocked process report, you must have the blocked process threshold system configuration option enabled.  A good starting value is 15, which is the threshold in seconds at which the report is generated.  To set this value, run the following code:

EXECUTE sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
EXECUTE sp_configure 'blocked process threshold', 15;
GO
RECONFIGURE;
GO
EXECUTE sp_configure 'show advanced options', 0;
GO
RECONFIGURE;
GO
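
To confirm the setting took effect, you can check sys.configurations (the option name may display with an “(s)” suffix depending on version, hence the LIKE):

/*
verify the blocked process threshold setting
*/
SELECT [name], [value], [value_in_use]
FROM sys.configurations
WHERE [name] LIKE 'blocked process threshold%';
GO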

The following code will create the event session and then start it.  Note that you can create the event session and just have it defined in your system without running it.  Then, if you start to have blocking you can set the blocked process threshold and start the event session.

/*
check to see if the event session exists
*/
IF EXISTS ( SELECT 1
            FROM sys.server_event_sessions
            WHERE name = 'Capture_BlockedProcessReport' )
    DROP EVENT SESSION [Capture_BlockedProcessReport] ON SERVER;
GO

/*
create the event session
edit the filename entry if C:\temp is not appropriate
*/
CREATE EVENT SESSION [Capture_BlockedProcessReport]
ON SERVER
ADD EVENT sqlserver.blocked_process_report
ADD TARGET package0.event_file(
    SET filename = N'C:\Temp\Capture_BlockedProcessReport.xel'
)
WITH (MAX_MEMORY = 8192 KB, EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
    MAX_DISPATCH_LATENCY = 30 SECONDS, MAX_EVENT_SIZE = 0 KB, MEMORY_PARTITION_MODE = NONE,
    TRACK_CAUSALITY = OFF, STARTUP_STATE = OFF);
GO

/*
start the event session
*/
ALTER EVENT SESSION [Capture_BlockedProcessReport]
ON SERVER
STATE = START;
GO

Capturing Data

Once the event session is started, you just wait until the blocking occurs.  The following code can be used to generate an example in your test/dev environment:

/*
create a table and insert
one row without committing
*/
USE [tempdb];
GO

CREATE TABLE [BlockingTest] (
[ID] INT IDENTITY(1,1) PRIMARY KEY,
[INFO] VARCHAR(10)
);
GO

BEGIN TRANSACTION
INSERT INTO [BlockingTest] ([INFO]) VALUES ('SQLskills');
GO

/*
run the following statement in a different window
*/
USE [tempdb];
GO
SELECT *
FROM [BlockingTest];

GO

After about 15 seconds, run the following code back in the original window:

/*
clean up (run in original window)
*/
USE [tempdb];
GO
COMMIT;
GO
DROP TABLE [BlockingTest];
GO

You can then stop the event session, and either leave it there until you need it again, or drop it entirely:

/*
stop the event session
*/
ALTER EVENT SESSION [Capture_BlockedProcessReport]
ON SERVER
STATE = STOP;
GO

/*
drop the event session
*/
DROP EVENT SESSION [Capture_BlockedProcessReport]
ON SERVER;
GO

Viewing the Report

To view the output from Extended Events, you can open the .xel file in Management Studio or query the data using the sys.fn_xe_file_target_read_file function. I typically prefer the UI, but there’s currently no great way to copy the blocking report text and view it in the format you’re used to.  But if you use the function to read and parse the XML from the file, you can…

SELECT
    event_data.value('(event/@name)[1]', 'varchar(50)') AS event_name,
    event_data.query('(event/data[@name="blocked_process"]/value/blocked-process-report)[1]') as [blocked_process_report]
FROM
(
    SELECT CAST(event_data AS XML) AS event_data
    FROM sys.fn_xe_file_target_read_file('C:\Temp\Capture_BlockedProcessReport*.xel', NULL, NULL, NULL)
) AS sub;
GO

Depending on how long you let the blocking continue, you may have captured more than one event and therefore have multiple reports in the output:

Retrieving the blocked process report

You can click on the output for any row to see the blocked process in XML format, and then work through the blocking:

The blocked process report

[Huge thanks to Jonathan for help with the XML.  I don’t think XML and I will ever be friends.  Geez.]

Summary

If you’re in need of the blocked process report and running SQL Server 2012 or higher, you now have another option for getting that information.  If you’re still new to Extended Events, check out the first two stairways in my XE series on SQLServerCentral.

Happy New Year!

Taking Risks

risk \’risk\ noun : the possibility that something bad or unpleasant (such as an injury or a loss) will happen

[reference: http://www.merriam-webster.com/dictionary/risk]

There are risks in life every day.  Some we see very clearly.  Others we don’t even notice.  Some are related to relationships with family and friends.  Some are related to our careers.  And some involve the hundreds of other components in our daily lives.

When I first started attending user group meetings in Cleveland, every month Allen White would say, “If you are interested in speaking, please consider submitting.  Everyone has something to share, and everyone else has something they can learn from you.”  I admit, at first I kind of thought it was just rhetoric.  I was wrong.  If you know Allen, you know that he really means it when he says it.  And I know he’s right.  I love asking people what they do in their job every day, because rarely do people do the same thing (especially in the SQL Server world) and I always learn something new.  Everyone in the SQL Server community is extremely well-versed in some SQL Server topic – enough so that they could put together a presentation and talk about it for an hour.  But many don’t, for a variety of reasons.  Some people just have no desire to speak in front of a group, and that’s fine.  You can share knowledge in other ways (hello blog posts).

But for those of you that have considered speaking, or are just a little bit interested, I give you:

Evelyn Maxwell

I tweeted about her SQLSaturday Cleveland submission yesterday (it’s on Improving Your PowerPoint Skills, in case you didn’t click through), but a lot of people aren’t on Twitter so I wanted to mention it here, particularly because many people commented that if a 7th grader has the chutzpah (my word, not anyone else’s) to submit to a SQLSaturday, then others can too.  Yes.  Exactly yes.

Now, Evelyn’s not all alone, her dad is David Maxwell (who just won speaker Idol at the PASS Summit) and I’m sure she’s getting some guidance from him.  Anyone who is speaking at a SQLSaturday for the first time is hopefully getting some mentoring – it’s a daunting task to take on all alone!  But if you want to try it, then do it.  Submit to your local SQLSaturday.  Find a mentor.  Take that risk.  I know there’s a fear of failure there.  Your session may not get accepted.  Evelyn’s may not, and she knows that.  But she tried.

Fly… photo credit: Jonathan Kehayias

Use of the C: drive by the Profiler UI and XE Live Data Viewer

I had an email from a fellow MVP this week who is in the process of learning Extended Events (hooray!). One question this person had was whether Extended Events has the same issue as Profiler, where the C: drive can be heavily used and potentially run out of space.

To clarify, with regard to the Profiler UI: if you are using the UI to capture events (not a server-side trace that writes to a file, which is the preferred method), the Profiler UI caches events to a local file when it runs against a SQL Server instance. It also performs caching when reading an existing file. These cached events are stored on the C: drive by default, unless you have changed the User TMP location in Environment Variables (Control Panel | System | Edit the system environment variables | Advanced | Environment Variables… ):

Accessing the User TMP variable

Depending on what events you have configured for Profiler, your filter(s), the workload, and how long you run Profiler, you could generate more events than the UI can handle. When that happens, events will start buffering to the User TMP location. If you’re not paying attention, you can fill up the C: drive. This can cause applications (including SQL Server) to generate errors or stop working entirely. Not good.

Reference: https://msdn.microsoft.com/en-us/library/ms174203.aspx
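
Since a server-side trace that writes to a file is the preferred way to capture Trace data anyway, here’s a minimal sketch of one for those of you who do still need Trace (e.g. on SQL Server 2008R2 and earlier).  It captures only SQL:BatchCompleted with a handful of columns, and the file path is a placeholder; adjust the events, columns, and path for your environment:

/*
minimal server-side trace sketch: SQL:BatchCompleted (event 12) with
TextData (1), SPID (12), Duration (13), and StartTime (14),
writing to a 50MB rollover file; the path is a placeholder
*/
DECLARE @TraceID INT, @maxfilesize BIGINT = 50, @on BIT = 1;

EXECUTE sp_trace_create @TraceID OUTPUT, 2 /* TRACE_FILE_ROLLOVER */,
    N'C:\Temp\MyServerSideTrace', @maxfilesize, NULL;

EXECUTE sp_trace_setevent @TraceID, 12, 1, @on;   /* TextData  */
EXECUTE sp_trace_setevent @TraceID, 12, 12, @on;  /* SPID      */
EXECUTE sp_trace_setevent @TraceID, 12, 13, @on;  /* Duration  */
EXECUTE sp_trace_setevent @TraceID, 12, 14, @on;  /* StartTime */

EXECUTE sp_trace_setstatus @TraceID, 1;  /* start the trace */

/* keep the trace id so you can stop (status 0) and close (status 2) it later */
SELECT @TraceID AS [TraceID];
GO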

Now, back to the original question. Does the same problem exist for Extended Events? Only if you’re using the Live Data Viewer.  After you have an event session created (you can just use system_health for this example), within Management Studio go to Management | Extended Events | Sessions, select the session, right-click, and select Watch Live Data:

Using the Live Data Viewer in XE

As events are captured, they will show up in the data viewer. As with the Profiler UI, the number of events that appear will depend on the session configuration and the workload. The Live Data Viewer will only show a maximum of one million (1,000,000) events. Once that number has been exceeded, it will start to cache events to the User TMP location, just like the Profiler UI. And just like the Profiler UI, that can fill up the C: drive if that is still the User TMP location. Note that the Live Data Viewer will automatically disconnect and stop displaying events if the engine determines it’s negatively affecting performance: if the event session’s internal memory buffers fill up and the events cannot be dispatched to the event stream for consumption, the engine will disconnect the UI from the event stream.

[More on the viewer if you’re interested: Introducing the Extended Events Reader]
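
If you want to review what a session has captured without keeping the viewer attached at all, one alternative is to read the session’s event_file target directly with sys.fn_xe_file_target_read_file.  Here’s a rough sketch using system_health as the example; the XML path into target_data reflects the usual shape of the event_file target data, so verify it on your build (this reads the current file, and you can substitute a wildcard to pick up rolled-over files):

/*
read a session's event_file target instead of watching live data
(system_health used here as the example session)
*/
DECLARE @FileName NVARCHAR(260);

SELECT @FileName = CAST(t.target_data AS XML).value('(EventFileTarget/File/@name)[1]', 'NVARCHAR(260)')
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_targets AS t
    ON s.[address] = t.event_session_address
WHERE s.[name] = N'system_health'
    AND t.target_name = N'event_file';

SELECT CAST(event_data AS XML) AS event_data
FROM sys.fn_xe_file_target_read_file(@FileName, NULL, NULL, NULL);
GO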

There are two action items for you:

  1. Don’t use the Profiler UI unless it’s a NON-Production environment.
    1. If you refuse to give up that woobie (name the movie reference) at least change the User TMP location to something other than C:
  2. If you use the Live Data Viewer in Extended Events for event sessions that generate a lot of events, change the User TMP location to something other than C:

Shout out to Jonathan for a review of this one.

Be the change

My day had a wonderful start…hot yoga at 6AM with one of my favorite instructors. Homemade cinnamon rolls for breakfast.  Great progress made with some client work. Then I read the email from Tom LaRock and the post from Wendy Pastrick regarding a harassment issue at this year’s PASS Summit. My heart is heavy.

I’ve read both texts multiple times. This line from Wendy keeps sticking in my head, “I declined, telling him I was fine.”

I understand. I’ve been there. And it’s an awful place. Any person who has been harassed knows this. Whether the harassment was physical – having someone grab your ass (or part of your body) is never funny – or verbal – a sly comment with a lewd look that makes you go “ew” – it doesn’t matter. The emotional response that comes with it is the indicator that you are not fine, and that you need to do something.

Very often we are taught to not “rock the boat.” Pull up your boots, put it behind you, and move on. It’s as if there is shame in experiencing that discomfort, and we must wholeheartedly deny that. If we do not, when we do NOT call out the offender, we let the offense continue. That person does it to someone else, who may or may not speak up, and the cycle continues.

I applaud Wendy for realizing that she was not fine, and for reporting it. For anyone who might think she over-reacted, I’ll strongly tell you to sit down and just stop. If you have ever experienced that feeling of discomfort, where your body temperature rises and you feel embarrassed – EVEN THOUGH YOU DID NOTHING WRONG – then you have been harassed. And if shame or fear has stopped you from saying anything, then I ask you – not encourage you, but implore you – to act differently if it occurs again. Do not wrap up those feelings inside a blanket and hide in a corner. Be brave and step forward.

I believe in going to any event with someone you trust – particularly events at the PASS Summit because there are so many people and because it’s a city where you probably don’t live. That person that goes with you is your wingman. You have his/her back, he/she has yours. You never, ever leave your wingman (if that sounds familiar, yes I’m quoting Top Gun…those pilots are on to something). If what happened to Wendy happens to you, you go right to your wingman. Do not say that you are fine. Let your wingman help you figure out next steps. One of those steps is reporting the event to PASS (or the proper governing body – HR, another Board – depending on the event) because this behavior will not change unless we begin to speak out and condemn it.

I leave you with this:

Be the change you wish to see in the world.

It starts with each one of us. Wendy has taken that path. In the unfortunate event that this happens to you, I hope you will follow.

Gratitude

With Thanksgiving just around the corner, I wanted to write about the appreciation I have for some colleagues I’ve had throughout the years, as well as several that I have now.  We often take time at the end of November to think about the things for which we are thankful.  And while that’s a very good thing, my goal this year is to take it one step further and make sure I tell the people in my life that I am grateful for them, and why.

This week at our Immersion Event, I went running on Wednesday morning with a few attendees. One of them mentioned that he had three daughters, and wondered whether it was worth encouraging them to go into IT, as he noticed that in the Immersion Event classes the ratio of women to men was pretty low. He suggested that perhaps IT wasn’t a great place for women. I immediately said that I would absolutely recommend it. There may not be a lot of women in IT, but that doesn’t mean that it’s not a good place for them and won’t continue to be.  And things are always changing.  Getting women into IT, and retaining them, is a continual conversation we have, particularly in the SQL Server community.  Perhaps I’m unique, but I don’t need to work with a large percentage, or even a majority, of women to feel comfortable with my team. Perhaps that’s because I’ve never encountered some of the issues that I’ve heard about from other women in technology.  The issues where a male colleague was not supportive, perhaps purposely kept a female colleague out of the loop, was very negative, or maybe even avoided or ignored female teammates entirely.  I don’t know if I’ve been lucky, or if, when I’ve encountered those men, I didn’t take it personally, figured that person was just a jerk, and just figured out how to work through it.  There are jerks everywhere – within IT and out of it – and those jerks can be men or women.

I have been fortunate to have supportive individuals, both male and female, in every job I’ve ever had, in both leadership and peer roles.  Maybe I’m unique, maybe not. But I’d like to take a minute to thank the individuals who supported me, and who continue to support and stand up for me and for other women in IT, and in their lives. To those of you who have daughters…my unsolicited advice is to absolutely encourage them to go into IT if that is something in which they are interested. And I would encourage fathers and mothers to reach out to women and men already in the field to ask for guidance and mentorship. Most people are happy to provide their experience and any insight they have; you just need to ask.

I won’t list all of the colleagues to whom I am grateful; there are just so many, and I’m afraid I might miss someone.  But if I’ve ever looked at you and said “thank you”, or given you a handshake or a hug with a “thank you”, or sent you an email or tweet with those words, or mentioned you in a post here or a post on Facebook, then YOU are one of those people who I appreciate, who I value, and for whom I am grateful to have in my circle.  Thank you, and Happy Thanksgiving.

Filtering Events in Trace/Profiler and Extended Events

It seems very wrong to write a post that talks about Trace, after all I’ve done to advocate that everyone start using Extended Events (XE). However, I know there are a lot of you who still use Trace because you’re running SQL Server 2008R2 and earlier, so you all get a free pass. Until you upgrade. If you’re running SQL Server 2012 or higher, I strongly recommend that you use XE. But that’s not the main point of this post. What I really want to do is step through filtering a .trc or .xel file to remove selected events.

Trace

Now, if you’ve worked with Trace for a long time, you may be wondering why you would ever filter events, or, let’s be honest, you might not even know you can do that (I didn’t for ages).  You can! The Profiler UI isn’t where you typically perform data analysis, but one reason you might filter out events is if you’re using Distributed Replay and you need to remove events to avoid generating errors during the replay. To do this, open the .trc file in the Profiler UI, then select File | Properties… Within the Trace File Properties window, go to the Events Selection tab, then select Column Filters… Within the Edit Filter window, you can choose a column (or multiple columns) on which to filter your data. If I want to remove all events before a specific time, I would edit the EndTime:

EndTime filter in Trace

This removes all events before 11:31PM on November 12, 2015. After you have removed the events, you can save the remaining data as a new .trc file through File | Save As | Trace File…

Extended Events

Filtering events from an Extended Events file is even easier. Open the .xel file within Management Studio, then select Extended Events | Filters (you can also select the Filters icon in the Extended Events toolbar). Within the Filters window, you can choose to filter on time, and/or any of the other fields captured for the events:

Date and logical_reads filter in Extended Events

Once you select Apply, all events before 11:31PM will be removed, as well as those with fewer than 1000 logical_reads. The remaining events can again be saved to a new file (Extended Events | Export to | XEL File…), or you can just run analysis against the filtered data. You can remove the filter at any time by going back to the Filters window and selecting Clear All.
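
If you’d rather do that kind of filtering in T-SQL instead of the UI, the same sys.fn_xe_file_target_read_file approach from the blocked process report post works here too.  This is just a sketch; the file path, cutoff time, and logical_reads field are placeholders for whatever your session actually captured:

/*
filter events from a .xel file in T-SQL instead of the UI
(the path, cutoff time, and logical_reads field are placeholders)
*/
;WITH xe AS
(
    SELECT CAST(event_data AS XML) AS event_data
    FROM sys.fn_xe_file_target_read_file('C:\Temp\MySession*.xel', NULL, NULL, NULL)
)
SELECT
    event_data.value('(event/@name)[1]', 'varchar(50)') AS [event_name],
    event_data.value('(event/@timestamp)[1]', 'datetime2') AS [event_time],
    event_data.value('(event/data[@name="logical_reads"]/value)[1]', 'bigint') AS [logical_reads]
FROM xe
WHERE event_data.value('(event/@timestamp)[1]', 'datetime2') >= '2015-11-12 23:31:00'
    AND event_data.value('(event/data[@name="logical_reads"]/value)[1]', 'bigint') >= 1000;
GO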

Summary

Hopefully this helps if you ever need to remove events from a .trc or .xel file.  Note that I always save the filtered data as a new file – I prefer to keep the original just in case I need all the events for some reason.