A recent project necessitated the use of a link shortener, which I had never used before, so I first turned to http://goo.gl to see what was involved. I was pleasantly surprised by how easy it was to integrate Google’s API into my code. The following screenshots detail this simple process.
1. Get your API key from your Google API console
2. Turn on the link shortening service via the console
I am making an AJAX call from jQuery here, but the important part is that the response comes back as an array. If the “error” index is set, there was a problem. Otherwise, your shortened URL will be contained in the “id” index.
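The call itself boils down to something like this (a rough sketch; the API key and long URL are placeholders, not values from the actual project):

$.ajax({
    url: 'https://www.googleapis.com/urlshortener/v1/url?key=YOUR_API_KEY',
    type: 'POST',
    contentType: 'application/json',
    data: JSON.stringify({ longUrl: 'http://www.example.com/a/very/long/path' }),
    dataType: 'json',
    success: function(response) {
        if (response.error) {
            //the "error" index is set, so something went wrong
            alert('Could not shorten URL: ' + response.error.message);
        } else {
            //otherwise the shortened URL comes back in the "id" index
            alert('Shortened URL: ' + response.id);
        }
    },
    error: function() {
        alert('The request to goo.gl failed');
    }
});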
Starting a new personal project in PHP, I wanted to try out a new framework for the fun of it. After reviewing many of the available frameworks, and ruling out the ones I have already used (Zend and OpenAvanti), I settled on CakePHP.
Installation and basic setup were a breeze, and I found the online documentation and tutorials helpful. Very early on, however, while I was still working through basic site setup tasks, I began to run into a few roadblocks. Luckily, the API documentation is quite thorough and a pleasure to read, so I eventually got through the initial “new framework pains”. In this post, I will walk you through my basic setup and hopefully answer a few questions that may pop up for you after those initial inquiries into the docs.
Here is what I wanted to accomplish (This example isn’t meant to showcase fantastic architecture. It’s just to serve the purpose of making sure I knew how Cake was going to handle things. And believe me, it took several tries):
Have a public home page with a link to the admin panel
The admin panel requires user login
Simple, right? So let’s get started. Note that I’m using CakePHP 2.0.
The first thing we want to do is tell Cake that we intend to use the built-in Auth (authentication) component. I chose to do that in the AppController, so that it is available to all controllers. Copy the default AppController file from the lib/Cake/Controller folder into the app/Controller folder. Within the class declaration, add the following code:
var $components = array('Auth', 'Session');
The $components variable takes an array of components, one of which we’ve specified as Auth.
Next we want the site to use our home controller as the default. Cake comes pre-built to use the pages controller, specified in the app/Config/routes.php file:
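Router::connect('/', array('controller' => 'pages', 'action' => 'display', 'home'));

Pointing the site at home/index instead is a one-line change, roughly along these lines:

Router::connect('/', array('controller' => 'home', 'action' => 'index'));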
Now that we are set up to view home/index when we navigate to our site, let’s define the home controller in app/Controller/HomeController.php:
class HomeController extends AppController {
var $uses = null;
function beforeFilter() {
    parent::beforeFilter();
    $this->Auth->allow( 'index' );
}
function index() {
}
}
Since the Auth component is in force for all controllers, we need to tell it to relax for the home/index action. We do this in the beforeFilter() callback, calling parent::beforeFilter() first so that anything AppController does in its own callback still runs.
Also note the setting of $uses to null. Controllers usually have this set to the table they will be referencing. In this case, it makes no sense to have a “home” table in the database, so we must tell the controller that we are not going to be using a table.
Now we just need a view to show our admin panel link (located in app/View/Home/index.ctp):
<p>This is the home/index view</p>
<p>
<a href="/admin">Admin Panel</a>
</p>
If you navigate to your site, you should see the admin panel link.
What we want to happen here is for a login screen to display if the anonymous user clicks the admin panel link. We’ve already told Cake that we are using Auth. Now we just have to do a couple of things to hook up a login screen and authenticate a user.
First, Auth expects a users controller (app/Controller/UsersController.php):
class UsersController extends AppController {
var $name = 'Users';
function login() {
if ($this->request->is('post')) {
if ($this->Auth->login()) {
return $this->redirect($this->Auth->redirect());
}
else {
$this->Session->setFlash(__('Username or password is incorrect'), 'default', array(), 'auth');
}
}
}
function logout() {
$this->redirect( $this->Auth->logout() );
}
}
Note: To insert a password into your database that can be used for login, use Cake’s built-in AuthComponent::password('your_password') function to determine the hash.
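Auth also needs a login form to show. A bare-bones app/View/Users/login.ctp along the lines of the official tutorial will do the job (this sketch assumes the standard username and password columns on the users table):

<?php
echo $this->Session->flash('auth');
echo $this->Form->create('User', array('action' => 'login'));
echo $this->Form->input('username');
echo $this->Form->input('password');
echo $this->Form->end('Login');
?>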
Let’s set up the admin panel. Create the admin controller (app/Controller/AdminController.php):
class AdminController extends AppController {
var $uses = null;
function index() {
}
}
And the view (app/View/Admin/index.ctp):
<p>This is the admin panel</p>
Now if you click the admin panel link on the home page, you should be greeted by a login screen.
After a successful login, you should then see the admin panel.
So that is how I got a very basic setup running with CakePHP. Check out the great documentation at the official site if you run into trouble, and even start browsing the API. It’s good stuff.
While shopping the LEGO website the other day, I ran across a free application they provide called LEGO Digital Designer. With it, you can assemble bricks from the included palette, then package and upload your creation back to the site. This looked pretty fun so I gave the software a whirl.
Now, I am not a professional 3D modeler, but I do have experience in a few tools, such as Blender, and expect a few necessities in a 3D modeling application. LEGO Digital Designer does not have them. But that’s OK, I thought. I’ll just play around with it.
For an experiment, I decided to model the building I work in, 25 Ottawa SW, Grand Rapids, MI, the home of Mindscape at Hanon McKendry. My goal was to capture the feel of the building (being that it is LEGO and not a precise, 3D replica) without getting into too much detail. Well, that’s hard for me to do, and before too long, I found myself agonizing over placing bricks exactly where they should be to accurately reflect the building’s layout. Then I would take a step back, take a deep breath, and remind myself that I’m just capturing the building’s essence. Then it would get fun again.
However, the application does have a lot of shortcomings, and as soon as the brick count started rising, I found it more and more difficult to place bricks, even in seemingly simple scenarios. The time it took to put things together started skyrocketing, and I decided to call it quits for now, before it turned into a lifelong project. I may have to look into other tools such as LDraw and see what they have to offer.
That being said, this is where the model stands right now, at 4,368 bricks. This view is from the Southeast (you can click the pictures to see the full-size view).
Here is a view from the Northeast.
As you can see, it captures the building’s essence on a basic level. Here are a few more detailed views.
In this view, we’ve zoomed in to look through the first floor to see the Mindscapers hard at work in “the pit”. If you look through the small window on the right, you can even see the bearded Matt, hard at work on a website.
Here we are taking a bird’s eye view through the roof. You can see the entire “pit” on the first floor, and several other items under construction. The stairwell and elevator shafts are positioned, and the Skywalk has been run through the building and ready to be connected to the adjacent buildings.
It would be nice to finish this, especially to add in the rest of Mindscape and Hanon McKendry on the sixth floor, and to build out 25 Kitchen on the first floor. But I think I’m going to need a new tool. Of course it would be nice to build it out of real LEGO, but I would probably have to win the lottery to buy 5,000+ bricks. Has anyone out there had experience with any other digital LEGO tools?
I was reading a blog post today about how a few companies are collaborating on a new app for NASA that will integrate the playability of an MMORPG and the coolness of real-world science. This sounds incredible to me and I can’t wait to try it. The one thing that irked me, though, was that whenever things like this come up, the clichéd thing for marketers, journalists and the media to say is, “it will make learning fun” or “your kids won’t even know they are learning”.
News flash to those out there that think you have to disguise learning: LEARNING HAS ALWAYS BEEN FUN AND PEOPLE ENJOY IT.
With few exceptions, everyone loves to learn. Learning new things is what makes life exciting instead of the same old thing every day. Talk to almost any kid in elementary school, and they’ll tell you they enjoy going to school and absorbing loads of new information. Post-elementary? Well, that’s where the excitement begins to diminish. Why?
That brings me to my theory of where this popular “It’s OK, they won’t know they’re learning” phrase came from. It probably stems from many kids’ attitudes toward post-elementary school. Learning isn’t a drag, school is. Allow me to elaborate.
The School Setting
Do you remember what it was like walking into your elementary school classrooms? I remember there were heavily decorated walls. Posters of planets, ecosystems, history and far-off places invigorated the mind. Art projects hung from the ceiling. Who wouldn’t get excited to dive into knowledge in a setting such as this? These are the best years of school.
This week we took our kids to their school open houses. As expected, the classrooms for our two youngest in elementary school were comfortable, full of creative inspiration and poised to take in a roomful of energetic kids. Then came the middle school for our two oldest kids. What did those classrooms feel like? In a word, boring. White cinder-block walls. One or two posters. Nothing hanging from the ceiling. Desks in rigid rows. Cold and sterile. Wow. Let’s open up that social studies book and have fun!
Why can’t we carry that “elementary school room” mentality throughout middle school, high school and beyond? Do we think decorations and models are childish? I don’t understand. Why is it that the system thinks we don’t need to be fully stimulated anymore as we grow older?
Following the Rules
One of my most negative memories of school was during a high school English class where we were given a short story writing assignment. At that time I had a favorite author who I liked to emulate. One of his popular writing techniques was to end a paragraph with a very short “sentence” for impact. For example, the end of the paragraph might read, “Very cold.” Now, you know and I know that this is not a complete sentence in the sacred, formal rules of grammar. So what? I loved how it sounded and how it made me feel when I read it (I actually used a few in this post, ahem).
“No. That is not allowed. I’m marking points off,” was the teacher’s response. That’s interesting. I wonder if the author, who made millions of dollars from his books writing that way, realized he was blatantly disregarding the English language with his sentence fragments.
My point is that creativity is stifled when students aren’t allowed to push boundaries and bend rules. Isn’t history full of examples of greatness achieved when so-called “rules” were not followed precisely?
At school, things may have changed. The chairs were in rows, and tree trunks were to be colored brown, not purple. If you lived in a world of purple tree trunks, you probably learned to hide it.
Encourage Learning. Encourage Fun.
Our education system should do everything it can to keep our students excited for the long term. I think two very simple steps could be taken right now: make school settings engaging throughout all levels of education, and encourage creativity and rule bending.
Learning is not a drag. School is. Let’s change that.
So many times I find myself, as a programmer, diving into a problem with an initial thought of a solution that ends up being way too complex. I don’t know if it’s programmers in general or just me, but it seems that as I gain more and more coding knowledge and can take on more and more challenging problems, my brain can fall into a complexity trap. What I mean by that is I now have numerous tools (algorithms, patterns, etc.) at my disposal, and at times I tend to attack a problem with one or more of these tools without stepping back, taking a deep breath, and determining which is the simplest way to get something done.
These solutions usually work, but the code can become messy and hard to maintain versus the simple way that I failed to see at the beginning.
I could probably make this a series of articles, as unfortunately, I tend to do this more than I’d like. But perhaps writing this down will force me to think about the issue more and prevent further frustrations.
OK, so now for a real-world example. I’ve recently been working on an admin panel for a web app using PHP and JavaScript with the Prototype.js library. There is a section of the admin panel where the user can create polls. The user enters a question along with one or more answers and can then re-sort the answers if desired. The re-sorting is where I started losing control.
Each answer is stored in an answers table in the database, and to keep it simple, we’ll just define three columns:
id
answer
sort_order
For the UI, the answers look something like this:
As you would expect, the up and down arrows allow you to move the answers up and down the list, effectively changing their sort order in the database.
So I immediately started down the road of reacting to the click of an arrow button by determining the current order of answers, swapping the sort orders in the database, getting the new order and manipulating the HTML markup to show the new order. Here’s the code (Note: this is just for moving an answer up the list. I had a separate, similar function for moving an answer down. I hadn’t refactored yet):
function upAnswer(id) {
//get sort order for this row
var sortOrder = $('orderAnswer_' + id).innerHTML;
//if this is the first row, no need to move it up (lowestOrder holds the smallest sort_order and is defined elsewhere on the page)
if (sortOrder == lowestOrder) {
return;
}
//find the row immediately above the selected row
var idAbove = 0;
$$('.answerRow').each(function(i) {
var checkId = i.readAttribute('id').substr(7);
if ( checkId == id ) {
throw $break;
}
else {
idAbove = checkId;
}
});
// swap the display orders in the database
new Ajax.Request('/AdminPoll/SwapAnswers/' + id + '/' + idAbove, {
onSuccess: function(response) {
//store the answer this is going to be moved
var moveAnswer = $('answer_' + id);
//store both display orders
var origOrder = $('orderAnswer_' + id).innerHTML;
var targetOrder = $('orderAnswer_' + idAbove).innerHTML;
//insert the moved answer in the right place (assuming each answer row is a
//div with class "answerRow" and id "answer_<id>", as the selectors above imply)
$('answer_' + idAbove).insert({before:
    '<div class="answerRow" id="answer_' + id + '">' + moveAnswer.innerHTML + '</div>'
});
//remove the moved answer
moveAnswer.remove();
//swap the display orders on the page
$('orderAnswer_' + id).update(targetOrder);
$('orderAnswer_' + idAbove).update(origOrder);
//hook up the action buttons
assignActions(id);
},
onFailure: function(response) {
alert('Unable to move answer');
}
});
}
The code above can be summarized in the following steps:
Determine the sort order for the row just clicked
Since we are in the “move up” function, exit the function if this row is already at the top
Loop through the rows to find the row immediately above this one
We now know the two rows involved in the reordering, so call the database to swap the two sort_order values
Copy the current row and move it to the position above the row just swapped with
Delete the original answer
Since we only moved the markup, call the assignActions function which wires up all of the events to the new input control and buttons
Yikes. Knowing that this was getting ugly, fast, I stepped back and looked at it again. Two key concepts drove the next iteration:
There is no need to maintain the actual sort orders. Just make sure the answers stay in order. For example, if the current order is
[answer1 => 5, answer2 => 8, answer3 => 9]
and we are going to swap the first two answers, the new order does not have to be
[answer2 => 5, answer1 => 8, answer3 => 9]
It can be
[answer2 => 1, answer1 => 2, answer3 => 3]
Stop thinking in terms of altering the database first and then the markup. Instead, swap the two rows in the markup, determine what happened, then send the new order to the database.
Here’s the new code:
function moveAnswer(id, moveUp) {
//get the answer element that is going to be moved
var answerSource = $('answer_' + id);
//get the current order of the list
var answerOrder = new Array();
var iCounter = 0;
$$('.answerRow').each(function(i) {
answerOrder[iCounter++] = i.readAttribute('id').substr(i.readAttribute('id').indexOf('_') + 1);
});
if (moveUp) {
//make sure that the answer the user wants moved isn't already at the top of the list
if (id == answerOrder[0]) {
return;
}
else {
//get the element above the source element
var answerTarget = $('answer_' + id).previous();
answerTarget.insert({
before: answerSource
});
}
}
//move down
else {
if (id == answerOrder[answerOrder.length - 1]) {
return;
}
else {
//get the element below the source element
var answerTarget = $('answer_' + id).next();
answerTarget.insert({
after: answerSource
});
}
}
resortAnswers();
}
function resortAnswers() {
//get the order of the answer ids
var answerOrder = new Array();
$$('.answerRow').each(function(i) {
if (i.readAttribute('id') != null) {
answerOrder.push(i.readAttribute('id').substr(i.readAttribute('id').indexOf('_') + 1));
}
});
if (answerOrder.length > 1) {
new Ajax.Request('/AdminPoll/SortAnswers/' + answerOrder.join(), {
onSuccess: function(response) {
},
onFailure: function(response) {
alert('Unable to sort answers');
}
});
}
}
First, note that this function handles moves both up and down by passing in a boolean. Here is a summary of the new code:
Get the answer that’s going to be moved
Get the current order of answers
If the answer is not at the end of the list already, insert it before or after the answer next to it, depending on which way we are moving
Call the database to store the new order
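On the PHP side, the /AdminPoll/SortAnswers action just has to walk the posted list of ids and hand out fresh sort_order values of 1, 2, 3 and so on. Here is a rough sketch (the table and column names come from the example above; the real code is wired into the app’s framework, and $pdo stands in for whatever database connection you already have):

function sortAnswers(PDO $pdo, $idList) {
    //the ids arrive as a comma-separated string, in their new display order
    $ids = array_filter(explode(',', $idList), 'is_numeric');
    $stmt = $pdo->prepare('UPDATE answers SET sort_order = :sort_order WHERE id = :id');
    $sortOrder = 1;
    foreach ($ids as $id) {
        $stmt->execute(array(':sort_order' => $sortOrder++, ':id' => (int) $id));
    }
}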
Ahhh, this feels so much better. As you may have noticed, the new code really isn’t any smaller (although there’s still improvement to be made), but it is much cleaner and handles the resorting more efficiently.
A couple of key indicators that I’m about to fall into the complexity trap and should stop me in my tracks are:
The code is getting ugly
The nagging feeling that there’s probably a function out there to handle something I’m trying to work out (in this case, Prototype’s next() and previous() functions)
It felt good to get back in there and clean up that code. But if I can be more aware of the warning signs, maybe I can do it right in the first place.
I ran into a challenge the other day where I had to calculate how much space was left between an HTML element and the bottom of the browser window. We were using Google’s Search-As-You-Type code (http://code.google.com/p/search-as-you-type/) which, according to a fellow developer, “worked like a dream”. He then handed it off to me to implement in another section of the application.
He had been using the search bar at the top of the page with no problems, whereas I needed it further down in a form. I dropped in the code and found that the JavaScript was somehow not calculating a dimension correctly. Depending on where you had scrolled the page to, the search box would either shrink the height of the dropdown results to barely anything, or sometimes nothing at all!
The Google code has a function called updateDimensionsAndShadow(), and that seemed to be the culprit. So after trying to modify what was in there and getting nowhere, I added a small section of code for the script to calculate the dropdown height correctly. Now, the big challenge for me here was that I’ve never had to try and find where the current “bottom of the page” was. I usually worry about the positioning of an element from the top of the page. So here is what I learned, and the code I wrote to fix the height issue.
The first thing we do is find the y-coordinate of the top of our search box’s input element, relative to the top of the document. This is done by first grabbing the offsetTop of the element, which is its position relative to the container it’s in. We then add the offsetHeight of the input, because we actually want the position of the bottom of the input box (that’s where the dropdown list will start).
var sf = document.getElementById('searchField');
var searchTop = sf.offsetTop + sf.offsetHeight;
Next we cycle through all of the element’s ancestors (via offsetParent) and continuously add each of their offsetTop coordinates to our calculation. This gives us the y-coordinate of the bottom of the search box relative to the very top of the document.
var sfParent = sf.offsetParent;
while (sfParent) {
searchTop += sfParent.offsetTop;
sfParent = sfParent.offsetParent;
}
Next we grab the page’s current scroll offset, which is the y-coordinate (relative to the top of the document) of the top of the visible area, just below the browser’s toolbar.
var yOffset = (window.pageYOffset) ? window.pageYOffset : document.body.scrollTop;
We can now finally calculate the max height of our search results dropdown list (refer to the diagram).
var maxSearchResultsHeight =
    document.documentElement.clientHeight - (searchTop - yOffset) - searchAsYouTypeConfiguration.bottomPageMargin;
What we are saying here is: “The height we have available is [the visible height in the browser] minus [the distance from the top of the visible area down to the bottom of the search box] minus [the margin we want between the droplist and the bottom of the browser].”
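Putting the pieces together, the whole calculation can live in one small helper, roughly like this (the searchField id and the searchAsYouTypeConfiguration.bottomPageMargin setting are taken from the snippets above):

function getMaxSearchResultsHeight() {
    var sf = document.getElementById('searchField');
    //bottom of the search input, relative to the top of the document
    var searchTop = sf.offsetTop + sf.offsetHeight;
    var sfParent = sf.offsetParent;
    while (sfParent) {
        searchTop += sfParent.offsetTop;
        sfParent = sfParent.offsetParent;
    }
    //how far the page has been scrolled
    var yOffset = (window.pageYOffset) ? window.pageYOffset : document.body.scrollTop;
    //visible height, minus the space above the dropdown, minus the bottom margin
    return document.documentElement.clientHeight - (searchTop - yOffset) - searchAsYouTypeConfiguration.bottomPageMargin;
}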
Now the one thing I do need to fix yet is cross-browser compatibility (cough, IE, cough, cough), but the above modifications appear to be working in the real browsers.
Please let me know if you have a better way of finding the bottom of the page.
All I wanted to do was to let users share a video link from a web app I was building. I thought it would be easy. And you know what, it probably is. It’s probably easier than what I’m going to show you here. But is it simply stated somewhere in Facebook’s documentation? Not that I could find. So piecing together parts of their docs, plowing through search after search and reading a multitude of blog posts, I came up with this solution.
Quick Overview
The idea is a simple one. The web app lets you choose a video and view it within the site. If the “share this” button is clicked, the familiar Facebook dialog will pop up, letting the user add an additional message to go along with the built-in metadata for the share. If the user has not yet logged into Facebook, they will be prompted to do so. OK, here we go.
Set Up a Facebook App
I thought that you could simply call a Facebook URL with some extra information and you were all set, but all of the examples I saw included an application id as part of the call. So I needed to set up an app. For those of you that haven’t done this before, here is a quick summary.
Search for the Developer App in the search bar and install it to your Facebook account.
After you install, you will be presented with a page that allows you to configure the app. There are a lot of settings here, and many of them pertain to applications with a lot more complexity. For our purposes, we are really interested in only a few things. Fill out the basic info as desired (such as website, support email, etc.) and add a logo to spruce up the share dialog box. You will notice on the main settings page that an application id is presented for you. You need that for your share code.
One other note is that apps can have sandbox mode enabled. When in sandbox mode, only you, as the developer, can use the app. This is of course useful for testing. Don’t forget to disable the sandbox when you are ready to go live.
Writing the Code
OK, now let’s get down to it. Like I mentioned, this is quick.
I have the call to Facebook wrapped inside a jQuery click event which opens a new window. The heart of it is the URL inside of window.open().
The call starts out with FB_FEED_DIALOG_URL, which is set to ‘http://www.facebook.com/dialog/feed?’.
Set your application id which you obtained earlier.
Add the URL that you want Facebook to redirect the user to after they’ve shared the link. Note that Facebook will append a ‘/?postId=xxxx’ to the URL so be prepared to handle that. I simply ignored it by telling our routing rules to trim off the question mark and everything after before handing it to the dispatcher.
The name, caption, description, source, link and picture will show up in the share dialog box.
As you may have noticed, I’m sharing a YouTube video. That brought its own challenges in setting these parameters correctly. I believe I obtained these settings from a blog somewhere, which I can’t remember (thank you to the author):
The YouTube video id is appended to each of the above constants.
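Pulling all of that together, the click handler ends up looking something like this (the selector, YOUR_APP_ID, the example.com URLs and the videoId value are placeholders; swap in your own):

$('#shareThis').click(function() {
    var FB_FEED_DIALOG_URL = 'http://www.facebook.com/dialog/feed?';
    var videoId = 'VIDEO_ID'; //the YouTube video id being shared
    var shareUrl = FB_FEED_DIALOG_URL
        + 'app_id=YOUR_APP_ID'
        + '&redirect_uri=' + encodeURIComponent('http://www.example.com/videos/' + videoId)
        + '&link=' + encodeURIComponent('http://www.youtube.com/watch?v=' + videoId)
        + '&source=' + encodeURIComponent('http://www.youtube.com/v/' + videoId)
        + '&picture=' + encodeURIComponent('http://img.youtube.com/vi/' + videoId + '/0.jpg')
        + '&name=' + encodeURIComponent('Name of the video')
        + '&caption=' + encodeURIComponent('www.example.com')
        + '&description=' + encodeURIComponent('A short description of the video');
    window.open(shareUrl, 'fbshare', 'width=640,height=480');
    return false;
});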
The dialog looks like this:
After the user clicks the Publish button, the link is posted to their wall and they are redirected to the URL you specified.
Conclusion
I really hope this helps some of you out there as I could not figure out why something seemingly so simple was taking me forever to figure out. Also, I’m sure there are a lot of alternatives to this method — some much simpler I’m afraid. I’d be interested in hearing your solutions.
I’ve really come to love my version control setup that I’ve been using heavily over the last several months, and I wanted to share it. It’s really quite simple and it has saved me a lot of headaches. I had been using a central, cloud-based storage method for uploading my working code (Google Docs, Dropbox, etc.) so that I had access to the code wherever I was and whatever device I happened to be using.
Here is the problem, though. For example, I’m coding on my desktop and am about ready to head out the door to continue coding at the coffee shop. So I zip up all of my files, upload them to the cloud, then download them to my notebook. That’s not too horrible of a process, but how do I fit version control into this scenario? My first try ended in more steps and more headaches as I could never keep the working copies and repositories in sync.
Then I heard of Dropbox’s synced folder feature. This was the answer. Now I don’t even move or package my files at all! I simply commit them to the repository. Here’s how it works.
First, I setup Dropbox with the synced folder feature enabled on all of the devices that I intend to use.
Any files I drop into any of these folders will automatically replicate to the other folders. Next, I setup a Subversion repository in the Dropbox folder of any one of the devices, which of course then replicates.
So here I am, back on the desktop. I checkout the latest version of the project code from the repository in the Dropbox folder, make my changes, then commit those changes back to the repository. The repository is now updated on all devices.
On over to my laptop, let’s say I’ve already been working on the code from there as well. I perform a Subversion update on my working copy of the project, which grabs all of the changes I made on my desktop, then merges them into my local working copy. As usual, I make changes and commit back to the repository, and so on and so on.
One final note. It’s good to have that repository backed up, so on one of my devices, I have Amazon’s Jungle Disk installed, which backs up my Dropbox folder every night.
So as you can see, once everything is set up, I simply hop on a device of my choice, update my working copy, make changes then commit back to the repository. Done. No headaches.
I just finished troubleshooting one of my websites that was acting abnormally in Internet Explorer, and I have emerged from the battle, heavily scarred. Here is a snapshot of the files involved (the actual names of the files have been changed to protect the innocent):
It’s a simple setup. Index.php holds a form and is supported by a JavaScript file. The JavaScript file makes AJAX calls to serverSide.php, which in turn accesses a MySQL database. The JavaScript file then redirects the browser to secondPage.php, which serves up the data.
The serverSide file is also accessed from secondPage through the JavaScript file. And therein lies the problem with Internet Explorer. Once index makes its call to serverSide, IE stores serverSide in its Temporary Internet Files folder. So when secondPage calls serverSide with new parameters, serverSide is retrieved from the cache folder instead of being called at the server and delivering fresh data to secondPage.
The solution was found in the PHP manual, which gives the following information in the “header” article:
PHP scripts often generate dynamic content that must not be cached by the client browser or any proxy caches between the server and the client browser. Many proxies and clients can be forced to disable caching with:
<?php
header("Cache-Control: no-cache, must-revalidate"); // HTTP/1.1
header("Expires: Sat, 26 Jul 1997 05:00:00 GMT"); // Date in the past
?>
I added this code to serverSide and sure enough, IE ignored its cached copy and served up fresh data to secondPage.
I hope this information will save someone else from a massive headache.
My last several posts have been following a personal project of mine, named Charity Tree, and how I am attempting to follow many of the guidelines laid out in Getting Real by 37Signals. This month I had the privilege to attend West Michigan Startup Weekend and was able to put a few of those gems of wisdom to practice, out in the field, as it were.
After the initial teams were formed on Friday night, we had time to get familiar with the project, the team members and the goals for the weekend. For my particular team, we had initially thought that we would be designing a website/web app over the weekend. So early Saturday morning, when we really got down to business, I started thinking about the coding that was going to be needed: database backend, form controls, web service functions, etc. I started asking the team some basic questions about how things would work, started to feel a bit overwhelmed, and then said to myself, “Wait! Haven’t I learned anything? I’m diving into details way too quickly”.
I guess in the excitement, and with the knowledge of the impending Sunday deadline, I wanted to start cutting code immediately. But where would that have gotten us? As I look back on the results of the weekend, we wouldn’t have gotten far. At most, we would have had some functioning code that would now have to be abandoned because of changing ideas and newly discovered requirements.
Luckily, I remembered what I had been blogging about for the past couple of weeks, and decided to put it into practice. So one of the first things I did was follow Getting Real’s Interface First concept (I blogged about it recently). Basically, I needed to start sketching out interfaces, rough-like, to start feeling out how the web app was going to work.
The team started to brainstorm again about how they envisioned different scenarios taking place. I wrote user stories to encapsulate these scenarios. We came up with several wireframes. One is pictured here:
Now, this wireframe may not look like much, but it was interface sketches like these that really made us start to think about how the application pieces as a whole fit together, and more importantly, how processes (both business and application) were going to be carried out.
These rough sketches brought many things to light and really helped us flesh out the project. What did we have by the end of the weekend? Wireframes and mockups. It would have been nice to have some functioning code, but we would have been way off base and throwing away a lot of work. Thinking “Interface First” saved us from going down the wrong road and helped the team zero in on what was important. The weekend was over, but now we could hit the ground running.